The following script prints a header line, then a timestamped snapshot of memory usage (from free -m) once per second.

#!/bin/bash -e

# Print free's column header once, under a date/time heading.
echo "      date     time $(free -m | grep total | sed -E 's/^ +//')"
# Then print one timestamped snapshot per second.
while true; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $(free -m | grep Mem: | sed 's/Mem://g')"
    sleep 1
done

The output looks like this (tested on Ubuntu 15.04, 64-bit).

      date     time          total       used       free     shared    buffers     cached
2015-08-01 13:57:27          24002      13283      10718        522        693       2308
2015-08-01 13:57:28          24002      13321      10680        522        693       2308
2015-08-01 13:57:29          24002      13355      10646        522        693       2308
2015-08-01 13:57:30          24002      13353      10648        522        693       2308
Answer from klaus se on Stack Overflow
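A variant of the same idea (a sketch, not part of klaus se's answer; the log file name and the bounded loop are illustrative) reads /proc/meminfo directly, which avoids depending on free's column layout and appends to a file instead of printing to the terminal:

```shell
#!/bin/sh
# Append a timestamped MemTotal/MemAvailable line (in MiB) to a log file
# by reading /proc/meminfo directly.
LOG="${1:-memory.log}"   # log file name is illustrative
for i in 1 2 3; do       # bounded here so the example terminates; use `while true` to run indefinitely
    awk -v ts="$(date '+%Y-%m-%d %H:%M:%S')" '
        /^MemTotal:/     { total = $2 }
        /^MemAvailable:/ { avail = $2 }
        END { printf "%s total=%d MiB available=%d MiB\n", ts, total/1024, avail/1024 }
    ' /proc/meminfo >> "$LOG"
    sleep 1
done
```

Run unattended (e.g. under nohup), this produces a log you can grep or plot later.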
GitHub
GitHub - azlux/log2ram: ramlog like for systemd (Put log into a ram folder) · GitHub
You need to stop Log2Ram (systemctl stop log2ram) and execute the installation process. If you used APT, this will be done automatically. In the file /etc/log2ram.conf, there are nine variables: SIZE: defines the size the log folder will reserve ...
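Per the snippet above, the usual workflow is to stop the service (systemctl stop log2ram), edit the file, then start it again. An illustrative excerpt of /etc/log2ram.conf (the value here is chosen for the example, not taken from the project):

```
# /etc/log2ram.conf (excerpt)
SIZE=128M   # size reserved for the RAM log folder
```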

monitoring - Is there a tool that allows logging of memory usage? - Unix & Linux Stack Exchange
I want to monitor memory usage of a process, and I want this data to be logged. Does such a tool exist?
January 13, 2011
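The question above can be approximated in a few lines of shell (a sketch only: ps-based sampling, with an illustrative PID, log file name, and a bounded loop):

```shell
#!/bin/sh
# Sample one process's resident set size (RSS, in KiB) once per second
# and append it to a log file.
pid=$$                # here we watch the shell itself so the example is self-contained
for i in 1 2 3; do    # use `while true` for open-ended logging
    printf '%s %s\n' "$(date '+%H:%M:%S')" "$(ps -o rss= -p "$pid")" >> rss.log
    sleep 1
done
```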
Move your logs and temp files to RAM and watch your portable fly!
If your log files are slowing your system, you have more serious problems.
r/linux
July 31, 2010
mount /var/log as tmpfs in Linux - Unix & Linux Stack Exchange
My logs grow at a rate of a few GB per hour. ... But in this case a RAM disk won't save you! ... The other answer here is correct: You don't need to / want to do this for any sane circumstances. Doing so has various drawbacks and usually zero benefits. If your Linux system is on a hard drive ...
Linux.com
Improve system performance by moving your log files to RAM - Linux.com
July 16, 2008 - Author: Ben Martin The Ramlog project lets you keep your system logs in RAM while your machine is running and copies them to disk when you shut down. If you are running a laptop or mobile device with syslog enabled, Ramlog might help you increase your battery life or the life of the flash drive …
Linux Magazine
Store Logs in RAM with LogRunner » Linux Magazine
When running, LogRunner creates a RAM disk and copies all log files onto it. The clever part is that the utility has a backup function that helps to keep RAM usage below a specified limit (16MB by default).
UbuntuPIT
How To Write Log Files in RAM Using Log2ram in Linux
October 31, 2025 - Log2ram writes log files to RAM instead of device storage. In this post, we will learn how to write log files in RAM using Log2ram in Linux.
Ubuntu Geek
Improve system performance by moving your log files to RAM Using Ramlog | Ubuntu Geek
If the kernel ramdisk is used, the ramdisk is created at /dev/ram9 and mounted on /var/log; by default Ramlog takes all the ramdisk memory specified by the kernel argument "ramdisk_size".
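For illustration (the value is hypothetical, not from the article), that kernel argument would appear on the boot command line like this:

```
# e.g. appended to GRUB_CMDLINE_LINUX in /etc/default/grub
ramdisk_size=65536   # kernel ramdisk size in KiB (here 64 MiB)
```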
IT'S FOSS
How To Write Log Files In RAM Using Log2ram In Linux :: IT'S FOSS
August 16, 2025 - Guide to Log2ram in Linux: write log files to RAM for faster access and reduced disk wear. Learn how to set it up.
Ubuntu Forums
[ubuntu] Logging to ram disk
October 30, 2009 - Look up tmpfs or ramfs. For a HowTo on tmpfs, see the tail end of this one: http://www.howtoforge.com/storing-fi...ory-with-tmpfs Once you can mount/unmount a tmpfs partition on a mount point, you might risk mounting /var/log as a tmpfs partition. Remember that /var/log/ must be populated with directories and files, and that they must have the correct permissions. For a brief comparison, see: http://www.thegeekstuff.com/2008/11/...mpfs-on-linux/
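The permissions point above matters because a freshly mounted tmpfs is empty. A sketch of recreating the expected directories (shown on a scratch directory; on a real system the target would be /var/log, the commands would run as root, and the names and modes depend on your distribution):

```shell
#!/bin/sh
# Recreate log subdirectories with explicit modes after mounting an
# empty tmpfs. Directory names and modes here are illustrative.
d=$(mktemp -d)               # stand-in for /var/log
install -d -m 0755 "$d/apt"
install -d -m 0750 "$d/private"
ls -ld "$d/apt" "$d/private"
```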
Linuxfun
How log2ram works. | The World's Linux Journal
December 11, 2021 - This article introduces log2ram, which will help you move /var/log/ from microSD/HDD to RAM! With this package, the lifetime of the microSD card will be extended with ease!
Reddit
r/linuxquestions on Reddit: Log memory usage for every running process?
February 23, 2021

Sorry, wasn't sure how to title this as it's a fairly odd problem. tl;dr of what I'm looking for at the end.

I'm running Arch w/ KDE. I've noticed in the last few months that something, after 1-3 days of uptime, will gradually make my idle memory usage go from 2-3GB (the same amount used just after login on a fresh boot) to almost my full 24GB, this happening over the course of those 1-3 days of uptime. The annoying part is that it never shows in htop or any similar programs; the highest memory usage is always Discord at "only" 800MB or so.

Before everyone yells at me... no, it is not cache. free -m shows almost all my RAM as "used" and not, say, 3GB used and 21GB cache (I can't post output as I'm typing this immediately after a reboot). And as I said in the previous paragraph, it doesn't appear to be a program, as nothing shows it (htop doesn't, ksysguard doesn't, bashtop doesn't, etc.). htop also shows it is not cache using my RAM, and even if all of those were somehow lying, the awful performance after a couple of days clearly shows that it's not cache clogging my RAM.

I'm extremely frustrated and completely out of ideas so I'm hoping there is a way to log literally anything that even breathes on my RAM so that I can comb through that data and find the problem. Are there any programs or scripts that can do this that don't just write the output of free -m to a text file every 60 seconds or whatever? Thanks!

tl;dr looking for program that logs every single interaction anything in my system has with my RAM.

Top answer
1 of 3
https://www.linuxatemyram.com/ It's a very common question, so people made a little fun page about it. Unix, and thereby Linux, thinks RAM is good, and if it has RAM it uses it whenever it can. Only when there's no RAM is it time to think about "conserving" and hence not allow everything that could use RAM to get some.

One part of that is buffers/cache. When Linux was born - particularly when Unix was born - disks were very slow. We're talking literally seconds to read a single sector. So one of the ways things were sped up was to cache important parts of the disk structure in memory. So it was only slow the first time you read a directory - the second time it just grabs the data from memory. This is called buffers - and Linux will grab whatever it can for buffers - UNTIL a program needs the RAM. Then it releases it and gives it to the program. It's nice like that :D

So "free" needs to be taken with a grain of salt - at least if you want to use it to understand how big a program you can load.

$ free
              total        used        free      shared  buff/cache   available
Mem:       64955636    51410160      496036        1864    13049440    13006536
Swap:      11722748       66816    11655932

This server has been running for 78 days - as you can see, it's using ALL its RAM. And that's good, right? Why have it sit unused? What's important is that the "free" column is a misnomer - it's not what's available to your programs. It's memory the kernel knows about but hasn't yet figured out how to use. It will eventually, depending on activity, use it ALL - and why not?

These numbers are the kernel telling you how your system is doing overall. They're not an indication of how much memory is available for your next program to load. You can basically look at the buff/cache as memory your programs can use.
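The practical takeaway from that answer: to know how big a program you can load, look at MemAvailable (the "available" column), not the "free" column. A minimal sketch (not part of the Reddit answer) reading it straight from /proc/meminfo:

```shell
# Print the kernel's estimate of memory available for new programs,
# converted from KiB to GiB.
awk '/^MemAvailable:/ { printf "available: %.1f GiB\n", $2 / 1048576 }' /proc/meminfo
```

On the server shown above (MemAvailable of 13006536 KiB) this would report roughly 12.4 GiB, even though "free" claims under 0.5 GiB.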
2 of 3
Do you use Chrome?
Baeldung
Log the Memory Consumption on Linux | Baeldung on Linux
March 18, 2024 - It also includes an option to print the usage statistics continuously. Unlike the other two commands we saw earlier, this prints more information about memory usage. ...

$ sar -o memory.log -r 1 3 --human
Linux 4.15.0-76-generic (slave-node)    04/21/22    _x86_64_    (4 CPU)

15:00:23  kbmemfree  kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
15:00:24       2.7G     4.4G       5.0G     65.2%     779.4M   1012.5M      7.5G     97.4%      3.8G   605.7M    84.0k
15:00:25       2.7G     4.4G       5.0G     65.2%     779.4M   1012.5M      7.5G     97.4%      3.8G   605.8M   120.0k
15:00:26       2.7G     4.4G       5.0G     65.2%     779.4M   1012.6M      7.5G     97.4%      3.8G   605.9M   156.0k
Average:       2.7G     4.4G       5.0G     65.2%     779.4M   1012.5M      7.5G     97.4%      3.8G   605.8M   120.0k
MakeUseOf
How to Use Log2Ram on Linux to Save Wear and Tear on Your Disks
October 14, 2022 - With Log2Ram installed on your Linux machine, logs aren't written directly to your disk; instead, as the name suggests, they're written to RAM.
Reddit
r/linux on Reddit: Move your logs and temp files to RAM and watch your portable fly!
July 31, 2010 - This is true by policy on most GNU/Linux systems. But on my netbook, which is basically a web-browsing appliance, I have chosen to trade away persistence of /var/log and /var/tmp in order to reduce disk writes. YMMV, but my subjective and non-scientific experience is "yes, it does 'feel' faster". ... If you move them to RAM, then you have less RAM for apps/applications, and will end up in swap that much sooner.
Top answer
1 of 2

Technically, you can mount /var/log as tmpfs. You'd need to be sure that /var/log is mounted before syslogd starts, but that's the case by default on most distributions since they support /var on a separate partition.

You'll obviously lose all logs, which I guarantee will be a problem one day. Logs are there for a purpose — they're rarely needed, but they're there when they're needed. For example, if your system crashes, what was it doing before the crash? Since when has this package been installed? When did I print this document? etc.

You won't gain much disk space: logs don't take much space relative to a hard disk. Check how much space they use on your system; I'd expect something like 0.1% of the disk size.

You won't gain any performance. Logs amount to a negligible part of disk bandwidth on a normal desktop-type configuration.

The only gain would be to allow the disk to stay down, rather than spin up all the time to write new log entries. Spinning the disk down doesn't save much electricity, if any: the hard disk is only a small part of a laptop's power consumption, and spinning up requires a power surge. Furthermore, spin cycles wear down the disk, so don't spin down too often. The main reason to spin down is the noise.

Rather than putting logs on tmpfs, arrange for your disk not to spin up when a file is written. Install Laptop Mode, which causes writes to disk to be suspended while the disk is spun down — only a full write buffer, an explicit sync or a disk read will spin the disk back up.

Depending on your configuration, you may need to instruct the syslog daemon not to call sync after each write. With the traditional syslog daemon, make sure that all file names in /etc/syslog.conf have - before them, e.g.

auth,authpriv.*         -/var/log/auth.log

With rsyslog, also make sure that log file names have - before them; the log files are configured in /etc/rsyslog.conf and /etc/rsyslog.d/*.
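As an illustration of that dash syntax for rsyslog (the facility and file name are chosen for the example, not taken from the answer):

```
# e.g. in /etc/rsyslog.d/50-default.conf — the leading "-" disables syncing
# after each write on older rsyslog versions; newer versions don't sync by default
mail.*          -/var/log/mail.log
```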

2 of 2

The other answer here is correct: You don't need to / want to do this for any sane circumstances. Doing so has various drawbacks and usually zero benefits. If your Linux system is on a hard drive (whether SSD or spinning), the simple answer is: You don't need this. If your system is spamming errors, this is still not a solution; it means you have a serious problem that needs investigation ASAP.

My own circumstances aren't sane. I'm running a toy project where I've installed a Linux OS on an ancient 8GB USB stick with far less than 1MB/s write speed, but on a computer with a good amount of RAM. It's not a production system. My system keeps hard-freezing for seconds at a time because of too many concurrent processes trying to write to my very slow drive. By sending all my cache and tmp dirs to RAM, I get a responsive system.

Here's the answer to your question:

  • Edit /etc/fstab as root.
  • Add something like this:
tmpfs /var/tmp tmpfs defaults,mode=1777,size=256M 0 0
tmpfs /var/log tmpfs defaults,mode=1775,size=512M 0 0

I've added ,size=256M and ,size=512M to limit their respective sizes; you may remove these if you are happy to let them consume up to 50% of your RAM (the tmpfs default).
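After `mount -a` (or a reboot) picks up the new entries, it's worth watching tmpfs usage so logs don't hit the size= cap, since tmpfs contents are lost on every unmount or reboot. A minimal check:

```shell
# Show which filesystem backs /var/log and how full it is; on a system
# with the fstab entries above, the Filesystem column reads "tmpfs".
df -h /var/log
```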

UMA Technology
How to Use Log2Ram on Linux to Save Wear and Tear on Your Disks - UMA Technology
December 22, 2024 - Log2Ram is a small shell script for Linux systems that redirects log data to a temporary file system in RAM. This means that instead of writing log files to the hard drive, the system writes them to a RAM disk.
Readthedocs
Linux System Logging — NetModule OEM Linux Distribution 1.0.0 documentation
There are two modes for how logs can be stored:
  • volatile = logs are stored in RAM
  • persistent = logs are stored to flash
ManageEngine
How to analyze Linux memory problems?
When a critical process is to be initiated and it requires more memory than what's available, the kernel starts killing processes, and records these events with strings such as "Out of Memory" in the log data.
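Those "Out of Memory" strings can be found with a simple grep over `dmesg` or `journalctl -k` output. A self-contained sketch of the search, shown on two sample kernel-log lines (the process details are made up for illustration):

```shell
# The pattern you would pipe `dmesg` or `journalctl -k` output through;
# here applied to sample lines so the example runs anywhere.
printf '%s\n' \
  'Out of memory: Killed process 1234 (chrome) total-vm:9876543kB' \
  'usb 1-1: new high-speed USB device number 2' |
grep -i 'out of memory'
```

Only the OOM-killer line survives the filter.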