Entropy is a technical term for "randomness". Computers don't really generate entropy; they gather it by observing things like variations in hard drive rotation speed (a physical phenomenon that is very hard to predict due to friction etc.). When a computer wants to generate pseudorandom data, it will seed a mathematical formula with true entropy that it gathered by measuring mouse clicks, hard drive spin variations, and so on. Roughly speaking,
entropy_avail is the count of bits currently available to be read from /dev/random.
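A minimal sketch of checking that counter yourself, assuming a Linux box (where the kernel exposes it at /proc/sys/kernel/random/entropy_avail):

```python
# Sketch: read the kernel's current entropy estimate, in bits.
# Assumes Linux, where the counter is exposed under /proc.

def read_entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's entropy estimate as an integer number of bits."""
    with open(path) as f:
        return int(f.read().strip())
```

You'd see the number drop as processes consume entropy and creep back up as the kernel gathers more.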
It takes time for the computer to gather entropy from its environment, unless it has cool hardware like a noisy diode or something.
If you have 4096 bits of entropy available and you cat
/dev/random, you can expect to read 512 bytes of entropy (4096 bits) before the read blocks while the kernel waits for more entropy.
For example, if you “cat /dev/random”, your entropy will shrink to zero. At first you'll get 512 bytes of random garbage, but then it will stop, and little by little you'll see more random data trickle through.
This is not how people should use
/dev/random, though. Normally developers will read a small amount of data, like 128 bits, and use that to seed some kind of PRNG algorithm. It's polite not to read any more entropy from
/dev/random than you need, since it takes so long to build up and is considered valuable.
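That pattern might look something like this sketch: pull 128 bits from the system's random source once (os.urandom asks the kernel for random bytes, which on Linux come from the same pool that feeds /dev/random), then let a PRNG stretch it:

```python
import os
import random

# Read a small amount of entropy once: 128 bits = 16 bytes.
seed = int.from_bytes(os.urandom(16), "big")

# Seed a PRNG with it; everything after this is cheap pseudorandom output
# that doesn't touch the kernel's entropy pool again.
rng = random.Random(seed)

# Generate as much pseudorandom data as you like.
values = [rng.random() for _ in range(1000)]
```

The point is that the expensive read happens exactly once; the thousand values come from the PRNG, not from the entropy pool.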
Thus, if you drain it by carelessly
catting the file like above, you'll cause other applications that need to read from
/dev/random to block. On one system at work we noticed that a lot of crypto functions were stalling. We discovered that a cron job, which ran every few seconds, was calling a Python script that kept reinitializing
random.random() on each run. To fix this, we rewrote the Python script to run as a daemon that initializes only once, and the cron job now reads data from it via XML-RPC, so it doesn't keep reading from
/dev/random on startup.
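The gist of that fix, sketched with illustrative names (the actual script and its XML-RPC interface aren't shown here): seed once at process start instead of on every invocation.

```python
import os
import random

# Sketch of the daemon-side fix: one module-level RNG, seeded a single
# time when the long-running process starts, instead of re-seeding on
# every cron invocation.
_rng = random.Random(int.from_bytes(os.urandom(16), "big"))

def get_random_token(nbytes=16):
    """Serve pseudorandom bytes without touching the entropy pool again."""
    return bytes(_rng.getrandbits(8) for _ in range(nbytes))
```

A daemon exposing something like get_random_token over XML-RPC reads entropy once at startup; the cron job then just makes a cheap RPC call each time it runs.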