High CPU Usage

From VuzeWiki

General advice on reducing CPU usage is available here.

High CPU usage within the Vuze process can have many causes:

Normal Situations

  • Hash Checking
When pieces of a file are downloaded they are checked for correctness by calculating their cryptographic hash and comparing it against the expected value from the torrent. This takes a small amount of CPU per piece, and if you are downloading quickly this can add up. The same process occurs when you manually recheck a file via the 'force recheck' menu option.
  • Transcoding
Converting video file formats is CPU intensive, so when you are doing this you will see high CPU usage. Note that in this case the CPU usage is registered against the converter application rather than the Vuze process.
  • Cryptography
In general cryptography (encryption, decryption) is resource intensive. If you install the I2P plugin then you will see elevated CPU usage as a result of its continual cryptographic operations.
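The hash checking described above can be sketched in a few lines of Java. BitTorrent (v1) verifies each piece with SHA-1; this is a minimal illustration of the idea, not Vuze's actual code:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PieceCheck {
    // SHA-1 digest of a byte array (BitTorrent v1 verifies pieces with SHA-1).
    static byte[] sha1(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-1").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-1 is always available", e);
        }
    }

    // Check a downloaded piece against the expected hash from the torrent file.
    static boolean pieceMatches(byte[] piece, byte[] expectedHash) {
        return MessageDigest.isEqual(sha1(piece), expectedHash);
    }
}
```

Each completed piece costs one digest over its data, which is why fast downloads show a steady background CPU load.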

Abnormal Vuze Causes

  • Memory Exhaustion
Vuze runs within a Java virtual machine (JVM) - this uses a garbage collected 'heap' to store data. The size of the heap is constrained, more information is available here. If you have a lot of active torrents the heap can become full. As this happens the JVM spends more and more effort reclaiming space in the heap (garbage collection), and this in turn causes increased CPU usage.
  • Bug
Occasionally a bug within Vuze might cause a thread to loop indefinitely (incorrectly synchronized access to a HashSet would be a good example of a rarely triggered cause).
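The heap figures behind the memory-exhaustion case can be read directly from the JVM. This sketch uses the standard `Runtime` API to report the same three quantities Vuze logs; the 90%/10% thresholds for "low on heap" are illustrative assumptions, not values Vuze itself uses:

```java
public class HeapCheck {
    // Heap pressure heuristic: allocated heap ('tot') near its ceiling
    // ('max') with little of it free. Thresholds are illustrative only.
    static boolean lowOnHeap(long max, long tot, long free) {
        return tot > max * 0.9 && free < tot * 0.1;
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max  = rt.maxMemory() / 1024;   // heap ceiling, KB
        long tot  = rt.totalMemory() / 1024; // currently allocated, KB
        long free = rt.freeMemory() / 1024;  // unused part of 'tot', KB
        System.out.printf("max=%d tot=%d free=%d (KB)%n", max, tot, free);
        System.out.println("low on heap: " + lowOnHeap(max, tot, free));
    }
}
```

When `tot` has grown to `max` and `free` stays small, the garbage collector runs almost continuously, which shows up as high CPU.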

Abnormal External Causes

  • Third party applications such as virus scanners often inject their components into the Vuze process in order to monitor network traffic. If there is a bug in these components, or they have performance issues, this will appear to be an issue with the Vuze process itself.
  • ESET Smart Security version 8
See Known Issues

Diagnosis

Vuze monitors the various threads within the process and logs details of the most active thread to rotating log files named thread_1.log and thread_2.log in the 'logs' folder of the configuration folder. An example line from these files is:

[21:36:13] Thread state: elapsed=10001,cpu=2790,max=PRUDPPacketHandler:sender(1404/14%),mem:max=253440,tot=66640,free=36124

The first 'max=' entry shows the thread name most active in the last 10 seconds (in this case 'PRUDPPacketHandler:sender') followed by the amount of CPU used (14%). Following this is the current JVM memory situation. The 'max' value here (253440) gives the maximum size the heap can grow to in KB. The 'tot' value gives the current total size of the heap (66640) in KB, so in this case around 250MB is available but only 66MB is in use. The 'free' value indicates how much of the 'tot' allocated value is actually free for use.

If the 'tot' value has grown close to 'max' and the 'free' value is small, then you are running low on heap memory.
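The memory check above can be automated by pulling the 'mem:' figures out of a thread log line. This is a hypothetical helper, assuming the line format shown in the example; the 90%/10% thresholds are again illustrative assumptions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ThreadLogParse {
    // Matches the trailing memory section of a Vuze thread_N.log line,
    // e.g. "mem:max=253440,tot=66640,free=36124" (values in KB).
    private static final Pattern MEM = Pattern.compile(
            "mem:max=(\\d+),tot=(\\d+),free=(\\d+)");

    // True if the line shows heap pressure: 'tot' near 'max', little 'free'.
    // Thresholds are illustrative, not values Vuze itself uses.
    static boolean lowOnHeap(String logLine) {
        Matcher m = MEM.matcher(logLine);
        if (!m.find()) throw new IllegalArgumentException("no mem: section");
        long max  = Long.parseLong(m.group(1));
        long tot  = Long.parseLong(m.group(2));
        long free = Long.parseLong(m.group(3));
        return tot > max * 0.9 && free < tot * 0.1;
    }
}
```

Applied to the example line above, this reports no heap pressure: 'tot' (66640) is well below 'max' (253440).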

The above example was from a user with the ESET induced CPU usage issue.

If you are running I2P then you will see threads with names like 'YK Precalc' and 'Job Queue n/m' among the most active; this is normal.

If a thread is using a very large amount of CPU (in the case of a bug, perhaps) then the log file will also contain a stack trace of the offending thread. This can be very useful to the developers, so please report it on the forums if you see it.