Too Many Open Files
Linux
Severity: Moderate
What Does This Error Mean?
The 'Too many open files' error means a process has hit the limit on how many files it can have open at the same time. Linux tracks every open file, socket, and network connection using file descriptors, and every process has a limit. High-traffic servers, databases, and applications handling many connections hit this limit regularly.
Affected Systems
- Ubuntu
- Debian
- Fedora
- CentOS
- Arch Linux
- openSUSE
Common Causes
- A service like a web server or database is handling more connections than the default limit allows
- A program has a file descriptor leak — it opens files without closing them, eventually exhausting the limit
- The ulimit setting for the user or service is set too low for the workload
- The system-wide file descriptor limit is too low for the number of processes running
- A crash or bug caused a process to accumulate many open file descriptors over time
How to Fix It
- Check current limits. Run: ulimit -n to see the soft limit for open files, and ulimit -Hn to see the hard limit. The soft limit can be raised up to the hard limit without root. The default soft limit on most Linux systems is 1024 open files per process, which is very low for modern applications.
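The checks above can be run in any shell; a sketch, using /proc to inspect the limits of a process that is already running ($$ here is the current shell's PID, shown as an example):

```shell
# Soft and hard open-file limits for the current shell
ulimit -n     # soft limit (often 1024 by default)
ulimit -Hn    # hard limit

# A running process's limits are visible under /proc/<pid>/limits.
# $$ is this shell's PID; substitute the PID of the affected process.
grep 'Max open files' /proc/$$/limits
```

The /proc view is useful because a long-running daemon keeps the limits it started with, even after you change the shell defaults.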
- Temporarily increase the limit. Run: ulimit -n 65536 in the same shell session before starting the affected program. This raises the soft limit to 65536 for the current session only; the change resets when you open a new terminal or restart.
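As a quick sketch (assuming a bash-like shell): the request will fail if it exceeds the hard limit, so falling back to the hard limit keeps the command safe on systems where the hard limit is lower than 65536:

```shell
# Raise the soft limit for this session; if 65536 exceeds the hard
# limit, fall back to raising the soft limit to the hard limit instead.
ulimit -n 65536 2>/dev/null || ulimit -n "$(ulimit -Hn)"

# Verify, then start the affected program from this same shell so it
# inherits the raised limit.
ulimit -n
```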
- Make the limit permanent for a user. Edit /etc/security/limits.conf and add: [username] soft nofile 65536 and [username] hard nofile 65536. Replace [username] with the actual username, or use * to apply to all users. Log out and back in for the change to take effect.
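The entries look like this in the file ("appuser" is a placeholder account name for illustration):

```
# /etc/security/limits.conf
# <user/group>  <soft|hard>  <item>   <value>
appuser         soft         nofile   65536
appuser         hard         nofile   65536
# or, to apply to all users:
# *             soft         nofile   65536
# *             hard         nofile   65536
```

Note that these limits are applied at login via PAM, which is why a fresh login session is required.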
- Increase the limit for a systemd service. Systemd services have their own file descriptor limits, separate from the system defaults and from limits.conf. Edit the service's unit file and add LimitNOFILE=65536 under [Service], then run: sudo systemctl daemon-reload && sudo systemctl restart [service-name]. This is the correct way to fix the error for services like Nginx, MySQL, or Elasticsearch.
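Rather than editing the packaged unit file directly, a drop-in override survives package upgrades. A sketch, with nginx used as an example service name:

```
# Created with: sudo systemctl edit nginx
# which writes /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65536
```

After saving, run sudo systemctl daemon-reload && sudo systemctl restart nginx, then confirm with: systemctl show nginx -p LimitNOFILE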
- Check the system-wide limit. Run: sysctl fs.file-max to see the total file descriptors available system-wide. Increase it: sudo sysctl -w fs.file-max=2097152. Make it permanent: add fs.file-max=2097152 to /etc/sysctl.conf. The system-wide limit is separate from the per-process limits; on busy servers running many services, both may need to be raised.
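The same values can be read straight from /proc without sysctl, which also exposes current usage: fs.file-nr reports three fields (allocated handles, unused allocated handles, and the maximum). A read-only sketch:

```shell
# Total file descriptors allowed system-wide (same value as
# "sysctl -n fs.file-max")
cat /proc/sys/fs/file-max

# Current usage: allocated handles vs. the system-wide maximum
awk '{print "allocated:", $1, " max:", $3}' /proc/sys/fs/file-nr
```

If the allocated count is approaching the maximum, raising fs.file-max is warranted; if it is nowhere close, the bottleneck is a per-process limit instead.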
When to Call a Professional
This error is fixable by adjusting system limits. For production servers hitting this error under load, a system administrator should tune the limits for the workload. For home or development use, the fixes above are sufficient.
Frequently Asked Questions
What is a file descriptor?
A file descriptor is a number that Linux uses to track an open file, socket, or pipe. When a program opens a file, Linux gives it a file descriptor number. Every open network connection also uses a file descriptor. The limit on file descriptors is therefore a limit on how many files and connections a process can have open at once.
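You can see file descriptors directly: every process's open descriptors appear as entries under /proc/<pid>/fd. A small demonstration (assuming a bash-like shell; /dev/null is used as a conveniently always-present file):

```shell
# Open /dev/null on descriptor 3 in this shell
exec 3< /dev/null

# List this shell's descriptors: 0, 1, 2 are stdin/stdout/stderr,
# and 3 now points at /dev/null
ls -l /proc/$$/fd

# Close descriptor 3 again
exec 3<&-
```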
Is it safe to raise the file descriptor limit to a very high number?
Yes — raising the limit does not cause problems by itself. Each open file descriptor uses a small amount of kernel memory, so setting the limit to 1,000,000 does not use that memory unless 1,000,000 files are actually open. For modern servers with gigabytes of RAM, setting the limit to 65536 or even 1,000,000 is safe and common.
How do I find which process is using the most file descriptors?
Run: sudo lsof | awk '{print $2}' | sort | uniq -c | sort -rn | head -20. This counts open-file entries per process ID (PID); note that lsof can repeat entries for threads and memory-mapped files, so treat the counts as an approximation. Then run: ps -p [PID] to identify which program has that PID.
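An alternative sketch that reads /proc directly, counting the actual descriptor entries per process and so avoiding lsof's duplicate rows (run as root to see processes owned by other users):

```shell
# For each PID, count entries in /proc/<pid>/fd and print the
# command name; unreadable processes are silently skipped.
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  [ "$n" -gt 0 ] && printf '%6d  %-8s %s\n' \
    "$n" "${pid#/proc/}" "$(cat "$pid/comm" 2>/dev/null)"
done | sort -rn | head -20
```

The output is one line per process: descriptor count, PID, and command name, sorted with the heaviest consumers first.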