zot
Oh well. Should we shoot the messenger?
PS: I've checked s01e04 ctu on AW - everything is OK.
I wonder how long it will last?
I don't think it means Readnews is getting DMCA notices; I think older posts on Readnews are roughly mirroring the completion of Highwinds, because of the peculiar newsfeed arrangements that Readnews has set up.
All the articles that Blocknews does have for that post are fed to it by Highwinds (as shown in the Path header). When Highwinds was having completion issues not related to DMCA takedowns, many of the missing articles were also missing from the Readnews servers. I don't know why Readnews is so dependent on one newsfeed from a single competitor for its older retention range.
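For anyone who wants to check that kind of thing themselves, here is a minimal sketch using Python's nntplib (in the standard library up to Python 3.12); the server name and Message-ID below are placeholders, not real values:

import nntplib

SERVER     = "news.example-provider.com"      # placeholder: use your own server
MESSAGE_ID = "<some-article-id@example.net>"  # placeholder: a Message-ID from the NZB

# Some providers also need user=/password= and readermode=True here.
with nntplib.NNTP(SERVER) as srv:
    resp, info = srv.head(MESSAGE_ID)         # HEAD fetches headers only
    for line in info.lines:                   # header lines come back as bytes
        if line.lower().startswith(b"path:"):
            # Read right to left: the Path header lists every server the
            # article passed through before reaching the one you queried.
            print(line.decode("utf-8", "replace"))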
I downloaded it with Blocknews, with the US server as main and the EU one as backup, and came up only 26 blocks short of a repair. I am surprised the slack couldn't be taken up with a Giganews account. I know I got these some time ago and they were fine. Ah well. They are several hundred days old, so they become a bigger target with each passing day. But I don't really know if it is DMCA or not.
I've been a Blocknews user for a year and I'm very pleased with their service, as long as we are talking about the fresh stuff...
But their claim on the website that "with our storage spool constantly growing, retention is now at 1,150+ days for binaries" is bullshit. The oldest thing that I could still get was around 600 days old.
My opinion is that this is a great service if you don't want access to really old posts. Worth its money. But they really should correct the claims on their website.
Excellent point, but that appears to be an entirely different issue, as I doubt that this problem was from a server-propagation error.
When a pay-TV release is missing the first article of every rar/par file (as was the case here), that would normally be a telltale sign of a DMCA takedown, which would happen at minimum several hours (but more likely several days or weeks) AFTER the articles were already propagated downstream to the next NNTP server in the path (a process that happens immediately after posting). A takedown would therefore only affect the servers of the company that received the notice.
I would assume that having the same random articles missing on two or more different servers in the propagation path would probably be from an upload or propagation error, but it would be a strange coincidence if those missing articles just happened to fit the pattern of a DMCA takedown, and an equally strange coincidence that the problem was only noticed long after the file was posted (and presumably downloaded many times).
It's not by any means absolute proof (and I hope I'm wrong about this), but it certainly seems to me more like a takedown than an upload/propagation error. Time will tell.
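Purely as an illustration of the pattern described above (not anything taken from this release): if your completion checker gives you, per file, the set of segment numbers that failed, the takedown signature is simply that segment 1 is gone from every rar/par file. A rough sketch with hypothetical file names and data:

def looks_like_takedown(missing):
    """missing maps each file name to the set of segment numbers that failed."""
    relevant = {name: segs for name, segs in missing.items()
                if name.lower().endswith((".rar", ".par2"))}
    if not relevant:
        return False
    # Takedown signature: the first article of every rar/par file is gone.
    return all(1 in segs for segs in relevant.values())

# Hypothetical completion-check result: only segment 1 missing everywhere.
print(looks_like_takedown({
    "release.part01.rar":    {1},
    "release.part02.rar":    {1},
    "release.vol00+01.par2": {1},
}))  # True -> fits the takedown pattern rather than random propagation loss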
Sorry, wrong assumptions and bad use of terminology on my part. With my own mental rule of thumb I was more or less quickly "eyeballing" the numbers: seeing about one missing segment in every 100 MB file, and the number of par MB being roughly 10% of the release MB (which is not at all the same as the par set's percent redundancy), just *felt* like it would still work. Of course, if I had not been so lazy, a better way would have been to actually count the blocks (assuming that the pars were even based on the same size as the rar articles).

The main reason the SuperNZB completion checker varies between runs is that it randomly shows missing parts in the wrong rows. Also, a timeout, when the Readnews server takes more than 10 seconds to respond to the 'head' command, is counted as a missing part. Let's assume though that the total is correct and your stats are 1.0% missing parts and 10% pars. You then assume that the files should repair quite easily on Blocknews alone. Do you understand why your assumption is way off the mark?
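To make that concrete, here is a rough back-of-the-envelope sketch (every number below is a made-up but plausible example, not taken from this release) of why "about 1% of articles missing" plus "10% pars by size" can still fall short when the par source blocks are much larger than the articles:

MB = 1024 * 1024

release_size     = 10_000 * MB        # e.g. 100 rar files of 100 MB each
article_size     = 384 * 1024         # typical yEnc article payload
missing_articles = 100                # roughly one missing article per rar file
par_volume       = release_size // 10 # "10% pars" measured by size, not blocks

for block_size in (20 * MB, article_size):
    recovery_blocks = par_volume // block_size
    # Worst case: each missing article lands in a different source block,
    # and every damaged source block costs one whole recovery block to repair.
    needed = missing_articles
    verdict = "repairable" if recovery_blocks >= needed else "NOT repairable"
    print(f"{block_size // 1024:>6} KB blocks: need {needed}, "
          f"have {recovery_blocks} -> {verdict}")

With 20 MB source blocks, 1,000 MB of pars is only 50 recovery blocks, so 100 missing articles sink the repair even though less than half a percent of the actual data is gone; with blocks sized to the article, the same 1,000 MB of pars is more than 2,600 recovery blocks and the repair is trivial.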
But this brings up another question: is there any reason (or advantage) why pars would be sized NOT to match the article's size setting?
That's why Readnews is peculiar; the servers work as expected for newer posts, but older retention is different. There's a longer response time after the article command is sent, and the article finder operation that's internal to Readnews might not even use the normal NNTP peering commands like IHAVE. They don't explain in detail what the setup is that provides the older retention but insist it's all legit with Highwinds.
There seems to be a poster requirement to include 10% pars, whether or not source block size matches article size. It's faster to create 10% pars if the source blocks are large, even though doing that renders the pars less efficient. Yes I know, it makes no sense but that's how it's done.
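As a rough illustration of that trade-off (the sizes here are hypothetical): each recovery block has to be computed over the whole source set, so fewer, bigger blocks mean a quicker create at posting time, while every repair then burns a whole oversized recovery block on what may be one small missing article:

MB = 1024 * 1024
release    = 10_000 * MB
redundancy = 0.10          # the "10% pars" requirement, by size

for block_size in (1 * MB, 10 * MB):
    source_blocks   = release // block_size
    recovery_blocks = int(source_blocks * redundancy)
    print(f"{block_size // MB:>2} MB blocks: {source_blocks} source blocks, "
          f"{recovery_blocks} recovery blocks to compute")

Ten times fewer recovery blocks to generate makes posting faster, but as the earlier calculation shows, those fat blocks are a poor match for the small holes a takedown or propagation error actually leaves.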
Well, if this doesn't make it more confusing: when I download episode 4 now, it comes down fine and unpacks okay. Before, it was missing 26 blocks. Maybe something is fixed... maybe something is screwing with us, or just maybe the Usenet gods don't like me, but I swear it didn't work before. Lol