The following script prints a header and then a timestamped memory sample every second.
#!/bin/bash -e
echo " date time $(free -m | grep total | sed -E 's/^ (.*)/\1/g')"
while true; do
echo "$(date '+%Y-%m-%d %H:%M:%S') $(free -m | grep Mem: | sed 's/Mem://g')"
sleep 1
done
The output looks like this (tested on Ubuntu 15.04, 64-bit).
date time total used free shared buffers cached
2015-08-01 13:57:27 24002 13283 10718 522 693 2308
2015-08-01 13:57:28 24002 13321 10680 522 693 2308
2015-08-01 13:57:29 24002 13355 10646 522 693 2308
2015-08-01 13:57:30 24002 13353 10648 522 693 2308
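A slightly tighter variant of the sampling line in the script above (an alternative sketch, not the original answer's code) lets awk drop the "Mem:" label instead of chaining grep and sed:

```shell
# Print one timestamped sample of the "Mem:" line from free -m;
# awk blanks the first field, which is equivalent to the grep|sed pair.
free -m | awk -v d="$(date '+%Y-%m-%d %H:%M:%S')" '/^Mem:/ { $1 = ""; print d $0 }'
```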
A small script like the following logs the full output of free every second:
rm -f memory.log
while true; do free >> memory.log; sleep 1; done
Occasionally when the need arises I just do:
$ top -d 1 -b | grep <process> >> somefile
It's not an elegant solution, but gets the job done if you want the quick crude value to verify your hypothesis.
I have written a script to do exactly this.
It basically samples ps at specific intervals, to build up a profile of a particular process. The process can be launched by the monitoring tool itself, or it can be an independent process (specified by pid or command pattern).
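The sampling idea can be sketched in a few lines (a minimal illustration, not the tool itself; PID and SAMPLES are placeholders to adjust):

```shell
#!/bin/sh
# Sample the resident set size (RSS, in KiB) of one process at a fixed
# interval, for a fixed number of samples. Here we watch our own shell.
PID=$$
SAMPLES=3
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    printf '%s %s\n' "$(date '+%H:%M:%S')" "$(ps -o rss= -p "$PID")"
    i=$((i + 1))
    sleep 1
done
```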
You can use the following loop in bash:
for i in `seq 1 60`; do
echo `grep Active: /proc/meminfo | sed 's/Active: //g'` >> usage.txt
sleep 1m
done
This command will record the current memory use to a file named 'usage.txt' every minute for the duration of 1 hour.
If you wish, you can change the usage.txt part of the command to save under a different name. You can also change the sleep 1m command to alter the time between each entry and the '60' in the seq section at the top to change the number of entries to be recorded.
When you have finished making your entries, you will have a text file of entries that can be imported into a spreadsheet for easy comparison.
EDIT: If you wish to also record the total memory with each entry, you can use the following commands:
for i in `seq 1 60`; do
echo `grep Active: /proc/meminfo | sed 's/Active: //g'`/`grep MemTotal: /proc/meminfo | sed 's/MemTotal: //g'` >> usage.txt
sleep 1m
done
These commands will instead record entries in the form <active>/<total>.
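An alternative sketch for the same output: a single awk pass over /proc/meminfo can pull both fields at once, avoiding the two grep/sed pipelines (the field names Active: and MemTotal: are the standard Linux ones):

```shell
# Print <active>/<total> (both in kB) with one read of /proc/meminfo.
awk '/^Active:/   { active = $2 }
     /^MemTotal:/ { total  = $2 }
     END          { print active "/" total " kB" }' /proc/meminfo
```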
sysstat does exactly that -- runs on a cron schedule and records various system metrics (CPU, RAM, block device usage and so on). Basically you apt-get install sysstat and forget about it. By default it keeps the metrics for the last month.
Later, when you need to diagnose an issue, you can use its CLI, sar, to browse the data, or a 3rd-party GUI such as ksar for visualisation.
Technically, you can mount /var/log as tmpfs. You'd need to be sure that /var/log is mounted before syslogd starts, but that's the case by default on most distributions since they support /var on a separate partition.
You'll obviously lose all logs, which I guarantee will be a problem one day. Logs are there for a purpose: they're rarely needed, but they're there when they're needed. For example, if your system crashes, what was it doing before the crash? Since when has this package been installed? When did I print this document? And so on.
You won't gain much disk space: logs don't take much space relative to a hard disk. Check how much space they use on your system; I'd expect something like 0.1% of the disk size.
You won't gain any performance. Logs amount to a negligible part of disk bandwidth on a normal desktop-type configuration.
The only gain would be to allow the disk to stay spun down, rather than spinning up every time a new log entry is written. Spinning the disk down doesn't save much electricity, if any: the hard disk is only a small part of a laptop's power consumption, and spinning up requires a power surge. Furthermore, spin cycles wear down the disk, so don't spin down too often. The main reason to spin down is the noise.
Rather than putting logs on tmpfs, arrange for your disk not to spin up when a file is written. Install Laptop Mode, which causes writes to disk to be suspended while the disk is spun down — only a full write buffer, an explicit sync or a disk read will spin the disk back up.
Depending on your configuration, you may need to instruct the syslog daemon not to call sync after each write. With the traditional syslog daemon, make sure that all file names in /etc/syslog.conf have - before them, e.g.
auth,authpriv.* -/var/log/auth.log
With rsyslog, also make sure that log file names have - before them; the log files are configured in /etc/rsyslog.conf and /etc/rsyslog.d/*.
The other answer here is correct: you don't need to (or want to) do this under any sane circumstances. Doing so has various drawbacks and usually zero benefits. If your Linux system is on a hard drive (whether SSD or spinning), the simple answer is: you don't need this. If your system is spamming errors, this is still not a solution; it means you have a serious problem that needs investigation ASAP.
My own circumstances aren't sane. I'm running a toy project where I've installed a Linux OS on an ancient 8GB USB with far less than 1MB/s write speed, but on a computer with a good amount of RAM. It's not a production system. My system keeps hard-freezing for seconds at a time because of too many concurrent processes trying to write to my very slow drive. By sending all my cache and tmp dirs to RAM, I get a responsive system.
Here's the answer to your question:
- Edit /etc/fstab as root.
- Add something like this:
tmpfs /var/tmp tmpfs defaults,mode=1777,size=256M 0 0
tmpfs /var/log tmpfs defaults,mode=1775,size=512M 0 0
I've added ,size=256M and ,size=512M to limit their respective sizes; you may remove these if you want them to consume up to 50% of your RAM.
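Once the entries are in place, you can activate them without rebooting (a sketch; note that any files already under those directories become hidden once the tmpfs is mounted on top):

```shell
# Run these as root to mount the new tmpfs entries from /etc/fstab:
#   mount /var/tmp
#   mount /var/log
# Then check which tmpfs filesystems are mounted and with what options:
grep tmpfs /proc/mounts
```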
CPU utilisation is easy:
From the command line:
while ( sleep 10 ) ; do cat /proc/loadavg >> mylogfile ; done
The sleep command will sleep 10 seconds and then return with exit value 0 (aka success). We abuse that to get a compact while (true) sleep 10.
/proc/loadavg contains the load averages over the last minute, the last 5 minutes, and the last 15 minutes. If you are logging every 10 seconds then you are only interested in the first value.
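If you only want that first value, awk can trim the line before it is appended (a small variation on the loop above):

```shell
# Log a timestamp plus only the 1-minute load average.
printf '%s %s\n' "$(date '+%F %T')" "$(awk '{ print $1 }' /proc/loadavg)"
```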
Or in a script:
#!/bin/sh
# Using /bin/sh, which is guaranteed to be present on any POSIX system.
# If you want to add shell-specific parts to the script, then replace this.
# E.g. if you want to use bash-specific stuff, change it to:
#   #!/usr/bin/env bash
# Make sure that the shebang is on the first line of the script (no comments above it!)
#
# While true, pause 10 seconds, then append information to mylogfile
while ( sleep 10 ) ; do cat /proc/loadavg >> mylogfile ; done
We can add a cat /proc/meminfo to the information we append to the log file. /proc/meminfo is quite extensive and it will log a lot. If you only want to filter on specific memory information then please add that to the post.
The simplest form of that would result in:
while (sleep 10) ; do cat /proc/loadavg /proc/meminfo >> mylogfile ; done
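And if the full /proc/meminfo is too chatty, one sample of a filtered variant might look like this (picking MemFree and MemAvailable is my assumption about which fields matter; filter on whatever lines you need):

```shell
# One sample: timestamp, 1-minute load average, and only the
# MemFree/MemAvailable fields from /proc/meminfo, all on one line.
printf '%s %s %s\n' \
    "$(date '+%F %T')" \
    "$(awk '{ print $1 }' /proc/loadavg)" \
    "$(awk '/^MemFree:|^MemAvailable:/ { printf "%s%s kB ", $1, $2 }' /proc/meminfo)"
```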
If you run atop as a daemon, it will log a huge amount of system state data: CPU usage, process list, disk I/O, memory usage and more. You can then step through the data with atop -r [filename].
I am trying to reduce log writes to consumer SSDs. Do you have any previous experience with or recommendations on which tool to use for that (log2ram or folder2ram) that works fine with Debian 12, and which folders have you moved to RAM? Thanks