
View Full Version : Determine 'Day Zero' For Usenet Server(s)



Beck38
02-23-2010, 02:50 AM
This is along the lines of the lack of things 'rolling off' usenet servers....

But the 'question of the day/millennium' is: what exactly IS the furthest back any particular server's retention goes...

I've d/l'ed several things over 500 days old (535 to be exact) off Astraweb, and that date was around 16Aug08 or so.

I'll try and pin that down a bit more 'exact', but perhaps others (on different suppliers/servers) can do the same.

Also, once that 'day zero' has been determined, figure out if it is 'moving', i.e., anything is 'rolling off' the server.

Would be 'nice to know'...!
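The day count here is just calendar arithmetic; a minimal sketch in Python, using the dates quoted in this thread (note that plugging in 16Aug08 to 23Feb10 actually gives 556 days, a bit more than the 535 quoted, so the endpoints above were presumably slightly different):

```python
from datetime import date

def retention_days(oldest_post: date, today: date) -> int:
    """Days between the oldest article still on the server and today."""
    return (today - oldest_post).days

# Dates quoted in this post: oldest Astraweb article ~16 Aug 2008,
# checked on 23 Feb 2010.
print(retention_days(date(2008, 8, 16), date(2010, 2, 23)))  # → 556
```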

ericab
02-23-2010, 04:18 AM
keep us informed beck ! would be good to know.

Rart
02-23-2010, 01:31 PM
I'm guessing though that a "day zero" would be different for every usenet provider (unless ofc they're resellers). Are we mainly just talking about astraweb here? Or giganews? Or any other provider?

Beck38
02-23-2010, 02:45 PM
I, like many others, moved my 'opns' over to Astraweb from a decade of Giganews a year ago... so all I can determine is Astra now. But those using others should be able to do the same.

taniquetil
02-23-2010, 03:23 PM
I'm currently using Binverse, but to be perfectly honest, they don't have the best retention for some more obscure stuff, even if it's only about 300 days old.

MultiForce
02-23-2010, 06:48 PM
Got a movie downloading now that is 558.9 days old :P That's GN though

Need some older NZBs I guess.... only got some 560-day-old stuff :/

Beck38
02-24-2010, 12:31 AM
Astraweb peters out around 10Aug08...

that makes it approx. 541 days or so. I'll keep an eye on that for the next week or so (or more) and see if it 'moves'.

If GN is 'later', what date does that come out to?

Comment: In doing this, I took a hard look at the overall 'traffic' on usenet, which I hadn't done for quite a few months.

YIKES!?!

For any server plant to keep up, it takes several terabyte drives EACH DAY. If I'm to believe a couple of the sites that track such things, it's over 8TB/day now, up from around 5TB that it was a year or so ago. So, almost double.
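A rough sketch of that storage math (the 8 TB/day figure is the one quoted above; the 2 TB drive size is an assumption for illustration, not something from the thread):

```python
daily_feed_tb = 8    # quoted usenet feed size, TB/day
drive_size_tb = 2    # assumed drive capacity (hypothetical)

# Drives a server plant must add per day just to keep pace,
# and the raw growth over a year of full retention.
drives_per_day = daily_feed_tb / drive_size_tb
yearly_growth_tb = daily_feed_tb * 365

print(drives_per_day)     # → 4.0 drives/day
print(yearly_growth_tb)   # → 2920 TB/year, i.e. roughly 2.9 PB
```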

Beck38
03-01-2010, 10:44 PM
Interesting...

What I've found/figured out is that despite most/all of the providers saying that *ALL* the newsgroups have 'up to x days' of retention, in fact it varies group to group.

YES, even on Giganews! Unless my figuring is out of whack... which may be the case, as I'm d/l'ing something right now that's 'right on the edge' of the retention for a particular group. But that is far in advance of the group I was using to figure out Astraweb's retention.

Really unknown. Will take a bit of time to figure out what's going on.
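One way to measure per-group retention directly is to ask the server for its oldest article number in each group and read that article's Date header. A rough sketch using Python's stdlib `nntplib` (available through Python 3.12, removed in 3.13); the server name is a placeholder and this is untested against any real provider:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def days_since(date_header: str, now: datetime) -> int:
    """Turn an article's Date header into an age in days."""
    posted = parsedate_to_datetime(date_header)
    if posted.tzinfo is None:
        posted = posted.replace(tzinfo=timezone.utc)
    return (now - posted).days

def group_day_zero(server: str, group: str) -> int:
    """Oldest article age, in days, for one group (makes a network call)."""
    from nntplib import NNTP  # stdlib through Python 3.12
    with NNTP(server) as nntp:
        _, _, first, _, _ = nntp.group(group)
        _, overviews = nntp.over((first, first))
        _, fields = overviews[0]
        return days_since(fields["date"], datetime.now(timezone.utc))

if __name__ == "__main__":
    # Example of the date math alone, as of the date of this thread:
    print(days_since("16 Aug 2008 00:00:00 +0000",
                     datetime(2010, 2, 23, tzinfo=timezone.utc)))  # → 556
    # Against a live server you would call, e.g. (placeholder host):
    # print(group_day_zero("news.example.com", "alt.binaries.boneless"))
```

Running this per group, rather than against one group only, would show directly whether retention really is uniform across groups.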

JustDOSE
03-02-2010, 12:01 PM
you can't pin down an exact point at which all files will be complete; servers are constantly increasing their retention, and you could download a complete 500-day-old file while another file that was complete when it was posted may not be complete at 300 days.

That is why a lot of people have a solid fixed-price host like astraweb/supernews, and a $5 giganews block account to use as a fill server.

usenet is not perfect but it works pretty fckn well.

here is one single 15MB file separated into 35 pieces and scattered across who knows how many servers: so if one server crashes or w/e, there goes a piece of the file. It happens to par files too, so the older a file gets, the more likely its host server crashed, or one of a hundred things could have happened that caused that small piece of info to get corrupted, deleted, or w/e.


<file poster="Anonymous &lt;Anonymous@[email protected]&gt;" date="1267452363" subject="Catch.Me.If.You.Can.2002.DVDRip.XviD-ADDICTION &quot;Catch.Me.If.You.Can.2002.DVDRip.XviD-ADDICTION.part14.rar&quot; yEnc 15728640 Bytes (1/35)">
<groups>
<group>alt.binaries.boneless</group>
</groups>
<segments>
<segment bytes="482944" number="1">[email protected]</segment>
<segment bytes="482664" number="2">[email protected]</segment>
<segment bytes="478978" number="4">[email protected]</segment>
<segment bytes="482777" number="5">[email protected]</segment>
<segment bytes="481915" number="8">[email protected]</segment>
<segment bytes="482984" number="6">[email protected]</segment>
<segment bytes="481557" number="10">[email protected]</segment>
<segment bytes="481851" number="14">[email protected]</segment>
<segment bytes="481843" number="15">[email protected]</segment>
<segment bytes="482025" number="16">[email protected]</segment>
<segment bytes="482098" number="19">[email protected]</segment>
<segment bytes="482117" number="21">[email protected]</segment>
<segment bytes="482642" number="26">[email protected]</segment>
<segment bytes="482242" number="22">[email protected]</segment>
<segment bytes="482297" number="25">[email protected]</segment>
<segment bytes="482567" number="27">[email protected]</segment>
<segment bytes="65355" number="35">[email protected]</segment>
<segment bytes="483000" number="28">[email protected]</segment>
<segment bytes="482079" number="30">[email protected]</segment>
<segment bytes="482103" number="29">[email protected]</segment>
<segment bytes="481822" number="18">[email protected]</segment>
<segment bytes="482728" number="31">[email protected]</segment>
<segment bytes="481903" number="12">[email protected]</segment>
<segment bytes="482488" number="33">[email protected]</segment>
<segment bytes="482711" number="7">[email protected]</segment>
<segment bytes="482155" number="20">[email protected]</segment>
<segment bytes="482313" number="13">[email protected]</segment>
<segment bytes="482213" number="3">[email protected]</segment>
<segment bytes="478386" number="24">[email protected]</segment>
<segment bytes="481595" number="9">[email protected]</segment>
<segment bytes="482618" number="17">[email protected]</segment>
<segment bytes="478847" number="11">[email protected]</segment>
<segment bytes="482086" number="23">[email protected]</segment>
<segment bytes="482353" number="34">[email protected]</segment>
<segment bytes="482237" number="32">[email protected]</segment>
</segments>
</file>
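The point about missing pieces can be checked mechanically: an NZB declares its segment total in the subject's "(1/35)"-style counter, so comparing that against the segments actually listed shows the gaps. A minimal sketch, using a made-up three-segment NZB fragment (not the real file above) with segment 2 deliberately missing:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down NZB fragment: subject declares 3 segments.
NZB = """<file poster="demo@example.com" date="0" subject="demo.rar yEnc (1/3)">
  <groups><group>alt.binaries.boneless</group></groups>
  <segments>
    <segment bytes="1000" number="1">part1@example.com</segment>
    <segment bytes="1000" number="3">part3@example.com</segment>
  </segments>
</file>"""

def missing_segments(nzb_xml: str) -> set:
    """Segment numbers declared in the subject but absent from the list."""
    root = ET.fromstring(nzb_xml)
    total = int(re.search(r"\((\d+)/(\d+)\)", root.get("subject")).group(2))
    present = {int(seg.get("number")) for seg in root.iter("segment")}
    return set(range(1, total + 1)) - present

print(missing_segments(NZB))  # → {2}
```

Real NZB files wrap one or more such `<file>` elements in an `<nzb>` root (often with a namespace), so a production checker would iterate over every file element rather than parse a bare fragment.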

Beck38
03-02-2010, 10:12 PM
A couple things:

1. IF a server stops 'rolling off' any content (by adding more capacity), AND IF they don't roll off by groups (which is up for grabs now), and despite GN/Astra and others saying they don't (all groups treated equally), then that is the day zero. Virtually no server plant today has multiple servers like in the olden days (5+ years ago); it's all one server plant.

2. Giganews hasn't had 'block' accounts for many, many years now. The cheapest account with decent access is their 'bronze' package, which allows access to the full database, at $8/month for 10GB of transfers.

I'm tracking down both Astra and GN right now, and it appears that both of them have huge gaps in retention that are group dependent, in that at first glance, they seem to have xxx days retention (anything further back seems to be off the reservation), then I go WAY WAY back and...

voila! All of a sudden something a hundred days or so even further back is available. Got me really scratching my head. I don't know WHAT they're doing. It may be that they're moving things around, trying to get their plants more efficient or something, and for a few days these large gaps occur while they're moving stuff around.

Who knows?!? I don't work there, and it's been years since I've been in any large plant array, so.... It's a bit bizarre. Suffice it to say that the 570+ day (back to around Aug08, maybe even July08) retention is semi-correct, just simply/maybe not on any given day. Or is. ;)

JustDOSE
03-08-2010, 07:47 PM
beck stfu .. ... muthafucka