Delete certain files after you burn them


The scenario:

You've found a 30 GB torrent consisting of 700 MB files which you want to burn to VCD, etc. The problem: you only have two gigabytes or so of disk space available, so you want to download one file at a time, burn it to VCD/CD, delete it, and then continue on to the next.

You've already read up on how to download selectively, so that part's easy, and from reading that you realize you're going to need space for at least two of the files at all times. But when you try to delete the file(s) you've already burned, even in an intelligent manner (meaning: stop the torrent first), you get a "Data Error: (some file) missing", or worse. The only way to recover from even this simple error is to remove the torrent, restart Azureus, and re-add the torrent after the restart, and of course you'll lose your share ratio data and such along the way.

All of this is not such a big deal if you're downloading just the one torrent, but if you have many others active, you're not going to want to shut down the client altogether just to free up your 700 MB. If you do that you'll lose all your current peer/seed connections, and who knows how they'll react when you reconnect. Best case, you lose a whole bunch of downloading time.

The solution:

  • Stop the specific torrent within Azureus
  • Make a note of the exact file name you wish to delete (or copy it to the clipboard)
  • Delete the file
  • Create a very small file with the exact same name as the one you deleted, in the exact same location. Personally, I find it easiest to make a copy of any .nfo file and then rename it, although you can simply use Notepad, 'touch', etc. to create an empty file with the same name (see the sketch after this list).
  • Context-click (right-click) the torrent and select "Force Re-check" (this step may not strictly be necessary)
  • After a few moments, stop it again... it will re-check anyway when you resume it.
  • Resume the whole torrent. It should re-check the whole torrent, regardless of your "Fast Resume" setting, and this time you will not get the missing data file error. After the hash check it will continue downloading your chosen file(s) as though nothing had happened.
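
If you'd rather script the delete-and-recreate steps than click around in a file manager, here is a minimal Python sketch of the same idea. The path is a purely hypothetical example; substitute the exact name and location of the file you burned, and make sure the torrent is stopped in Azureus first.

  import os

  # Hypothetical path: replace with the exact name/location of the file
  # you just burned (copy it from Azureus to avoid typos).
  burned_file = r"D:\Downloads\MyTorrent\episode01.mpg"

  # The torrent must already be stopped in Azureus at this point.
  os.remove(burned_file)            # frees the ~700 MB

  # Recreate a zero-byte placeholder with the exact same name, so Azureus
  # still finds a file where it expects one instead of throwing a data error.
  with open(burned_file, "wb"):
      pass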


Of course there is no way to download/burn individual files without wasting some space on the previous file too, as described in the link above. (Don't delete the file immediately before the one you currently want to download, or you'll just have to download parts of it again AND waste the disk space required for the entire previous file; the sketch below shows why.) So it's not perfect, but this method will allow you to scrape by with a minimum of disk space usage. BTW: you'll never become an actual "seed", but in essence you're doing the exact same thing on a per-file basis as long as you leave your connection up. Please do so. ;)
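
To see why that previous file has to stick around, remember that BitTorrent pieces are fixed-size blocks that run straight across file boundaries. The little sketch below (with made-up piece and file sizes, not taken from any real torrent) makes the overlap visible: the last piece of one file is usually also the first piece of the next.

  # Illustration only, not Azureus code. Piece size and file sizes are
  # made-up example values.
  piece_size = 1 * 1024 * 1024                 # assume 1 MiB pieces
  files = [("previous.mpg", 700_123_456),      # file you've already burned
           ("wanted.mpg",   700_123_456)]      # file you're downloading next

  offset = 0
  for name, size in files:
      first_piece = offset // piece_size
      last_piece = (offset + size - 1) // piece_size
      print(f"{name}: pieces {first_piece}..{last_piece}")
      offset += size

  # Both files report piece 667: unless a file happens to end exactly on a
  # piece boundary, its last piece spills into the next file. Delete
  # "previous.mpg" too soon and that shared piece fails the hash check,
  # so part of it has to be downloaded again.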