I set up a test: 36 GB of file sets on my server. The server is a Linux box on the same network as my PC.
- Queued up two 6 GB sets, a 4 GB set, and a 10 GB set, one right after the other.
- 10 connections to the server.
- The machine is a very powerful quad-core, downloading to a 7200 RPM 2 TB WD Caviar Black drive.
Set #1, 6 GB - needed an unrar.
Set #2, 6 GB - needed a repair and an unrar.
Set #3, 4 GB - just a download.
Set #4, 10 GB - needed an unrar.
Set #1 downloaded, averaging 200 Mbps (24 MB/s).
Set #1 started to unrar while Set #2 started downloading. Set #2 downloaded at about 200 Mbps (24 MB/s). Speed dipped a little and the chunk cache dropped as low as 150.
Set #1 finished unrar
Set #2 finished and started to repair.
Set #3 downloaded during the repair. It never went below 24 MB/s.
Set #4 started to download while Set #2 was still repairing. Set #4 downloaded at 24 MB/s. If anything it dipped less than Set #2, probably because the files were 500 MB each.
Set #2 finished repair and started to unrar. Set #4 continued to download.
Set #2 finished the unrar.
Set #4 finished and unrared.
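The throughput figures above can be cross-checked with a quick conversion: 200 Mbps on the wire works out to 25 MB/s raw, and after protocol and yEnc overhead is discounted, roughly the 24 MB/s seen on disk. A minimal sketch, where the 4% overhead figure is an illustrative assumption, not a measured Newsbin value:

```python
def wire_to_disk_rate(mbps, overhead=0.04):
    """Convert a line rate in megabits/s to an approximate on-disk
    rate in MB/s. `overhead` models protocol/yEnc framing that is
    downloaded but not written as payload (assumed 4% here)."""
    return mbps / 8 * (1 - overhead)

print(wire_to_disk_rate(200))  # 200 Mbps line rate -> ~24 MB/s to disk
```

So the numbers in the test log are internally consistent: the drive was keeping up with the full line rate.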
I don't doubt what you're seeing, but I'm pretty sure the reason you see it is that Newsbin does the par scan as each file downloads instead of later. Maybe it's something I can look at down the road. It's clear to me that your disk is the bottleneck, at least as far as Newsbin's current design is concerned; a different design might work better on a modestly powerful machine.
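Newsbin's internals aren't public, so this is only a sketch of the trade-off described above, using plain MD5 in place of the actual par2 checksums: hashing each chunk inline as it is written means verification rides along with the download and competes with writes for disk time, while deferring verification costs an extra sequential read that can be scheduled after the download burst is over. Both function names and the chunk size are illustrative assumptions.

```python
import hashlib

CHUNK = 512 * 1024  # assumed article-sized chunk


def inline_verify(chunks, path):
    """Write each chunk and hash it immediately -- the 'scan as each
    file downloads' approach. No re-read, but the hashing happens
    while the disk is busiest."""
    h = hashlib.md5()
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            h.update(chunk)
    return h.hexdigest()


def deferred_verify(chunks, path):
    """Write everything first, then re-read the file in one sequential
    pass -- the 'scan later' approach. Costs an extra read, but it can
    run when the disk is otherwise idle."""
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```

Both paths produce the same digest; the difference is purely when the disk pays for the verification pass, which is why a fast drive masks the cost on one design and not the other.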