Skips/Fades, Sometimes Why Unknown, Sometimes Really Obvious Why



Beck38
09-13-2011, 11:59 PM
I watch my uploads pretty closely, and when it looks like something is going 'off the rails', I pull out all the stops to try and figure out why.

A lot of the time, it's simply unknown, especially when the propagation appears to be 100% (at least one server other than the one I'm posting at is getting the stream perfectly), but others seem to be having major trouble.

Today is a good example. Giganews/US is getting the propagated parts without any problems whatsoever, but Giganews/EU was skipping like crazy. After watching that for a good 6 hours, I took a look at the propagation and found that Giga/EU had changed its feeds around, taking from outfits called 'sonic.net', 'alt.net' and 'xlned.com'. Why they would change things around I have no idea, especially as Giganews/US hadn't changed at all.

So I checked around, and Blocknews had changed its feeds around as well. Major skips.

Now, usually I wouldn't mention minor stuff, but why the folks running these plants change their feed/peering arrangements around is a bit beyond me. What they'll be left with is huge gaps in their plant that they'll be fighting for days/weeks/months if they don't figure it out quick.

BTW, neither Astra/US nor Astra/EU has changed its feeds, and neither has any problems, just like Giga/US.

'If it isn't broke, why fix it?'

Hypatia
09-14-2011, 08:08 AM
I do hope you sent them an email =) just so they know what a fucked-up thing they've done =)

mesaman
09-14-2011, 03:08 PM
Did you ask in giganews.general?

hdjunky
09-14-2011, 03:35 PM
Who did you upload it through? Astraweb? If several large providers had problems, maybe it could have been the original posting server that mangled something? I don't know. How do you tell who changes what around? Just look at the header?

Beck38
09-14-2011, 11:52 PM
How do you tell who changes what around? Just look at the header?

Look at the header: it tells you where the data originated and which servers it went through on its travels to the server you just d/l'ed it from.

If your posted data ends up not 'arriving' at lots of other servers, then obviously the posting server's propagation is having problems. If at least one other (independent) server system receives everything okay, then obviously something is going on with those other servers.
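For anyone who wants to script it, here's a minimal sketch of pulling the Path header and splitting out the hops. The server name and message-ID are placeholders for your own, and nntplib shipped with Python 3 only through 3.12 (it was removed in 3.13):

import nntplib

SERVER = "news.example.com"        # placeholder: your provider's hostname
MSG_ID = "<your-article@example>"  # placeholder: a message-ID you posted

with nntplib.NNTP(SERVER) as conn:
    _, info = conn.head(MSG_ID)    # fetch the headers only, not the body
    for raw in info.lines:
        line = raw.decode("utf-8", errors="replace")
        if line.lower().startswith("path:"):
            # Each relay prepends its name to Path, '!'-separated, so the
            # leftmost hop is the server you fetched from and the rightmost
            # is the injection point (often with a 'not-for-mail' tail).
            hops = line.split(":", 1)[1].strip().split("!")
            print(" -> ".join(hops))
            break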

Virtually without fail, sending a 'heads up' to 'front office' folks is an exercise in futility. They really don't have a clue.

zot
09-16-2011, 07:20 PM
I've never really understood why Usenet servers even have a 'propagation path' -- I mean, why can't Giganews get Astraweb's posts directly (and vice-versa) without having all those middlemen?

I assume that usenet was just never set up that way.

Beck38
09-16-2011, 08:46 PM
It all depends on the 'peering'.

Now, you and I may agree that Giga and Astra are #1 and #2 (or, #2 and #1), but usenet servers live by those peering agreements, whether they are 'on paper', a 'handshake', ironclad or just a passing thing that was set up in dim history.

What's generally interesting is that LOTS of stuff posted to one DOES go directly to the other, for hours/days, then out of the blue, things start getting routed through some server you never heard of. After a few hours/days, it changes back to what we'd consider 'normal'.

There may be embedded messages being passed between the servers to say 'hey, I've got some maintenance to do right now, hold off on your direct traffic for the next 'x' hours' or some such. Then again, it might also be something completely random in their operation they do to show that they aren't 'colluding' in passing traffic, I don't know.

But yep, at the end of the day, doesn't make much sense for even the 'top five' servers to send tons of traffic through servers that have 'dodgy' reputations for good (or even fair) operation. That's life I guess.

zot
09-16-2011, 09:34 PM
Now that I think about it, I bet it's because the system was designed around the concept that Usenet would always consist of very many small, local news servers -- rather than just a few huge servers, as we have with commercial binary servers today. Usenet did indeed start out that way (and text-only servers are still fairly numerous) but the system's founders probably never foresaw the rise of binaries (and their unique set of challenges), or corporate consolidation (as in Highwinds buying up a slew of independent providers).

So I'd imagine that with a system of thousands (or tens of thousands) of separate NNTP servers (as perhaps originally envisioned by the creators) it's obvious that relying on direct-peering between every single provider would be far too inefficient.

But with only about a dozen or so major binary servers today, I believe this serial-peering arrangement has few benefits and mainly serves as an additional source of error.

And then I suppose it's also possible that a company would rather not take action that it believes mainly serves to improve the quality of service of its chief competitor.

Beck38
09-17-2011, 08:31 PM
But with only about a dozen or so major binary servers today, I believe this serial-peering arrangement has few benefits and mainly serves as an additional source of error.

And then I suppose it's also possible that a company would rather not take action that it believes mainly serves to improve the quality of service of its chief competitor.

Then again, no one provider has such a huge advantage in their userbase that they can stick their noses up in the air and basically say the H*LL with everybody else. Okay, maybe GN if (a BIG IF) they dropped their prices by 2/3rds or thereabouts. hahahaha

There have been many consolidations, buyouts, mergers, etc. over the years, but if you take a look at the traffic counts around the planet, none of it has hampered anything, and the continued drop in the cost of storage (and thereby of running the plants) has meant that 'keeping pace' with that growth has been fairly easy to do.

I first got 'involved' with usenet in 1987, a 'refugee' from Fidonet, and ran my own message-only operation from 1994-8 (ISDN the fastest connection available, after running a dial-up BBS for some 4 years previous). Then as DSL began to be fielded, things started to really take off.

But binaries are the biggie. The 'internet' has an 'internet2'; maybe the usenet 'heavyweights' need to get together and figure out some kind of 'usenet2' to route things more efficiently and 'self-correct' the packets, maybe using the par2's, and maybe come up with an improved upload schema that takes something like JBinUp to the next level (since it's block level now, how about bit level?).
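Just to illustrate the 'self-correct' idea, here's a toy Python sketch of block-level verification: hash fixed-size blocks on upload, re-hash on arrival, and re-fetch only the blocks that differ. Real PAR2 goes further, using Reed-Solomon recovery blocks so damaged blocks can be rebuilt without any re-fetch -- and the block size here is just an assumption:

import hashlib

BLOCK = 768 * 1024  # assumed block size, roughly a typical yEnc article

def block_digests(data: bytes, size: int = BLOCK) -> list[str]:
    """Return one SHA-256 digest per fixed-size block of the payload."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def damaged_blocks(sent: list[str], received: list[str]) -> list[int]:
    """Indices of blocks whose digests no longer match."""
    return [i for i, (a, b) in enumerate(zip(sent, received)) if a != b]

# Example: flip one byte in the third block and locate the damage.
original = bytes(3 * BLOCK)
corrupted = bytearray(original)
corrupted[2 * BLOCK + 10] ^= 0xFF
print(damaged_blocks(block_digests(original), block_digests(bytes(corrupted))))
# -> [2]: only that block would need re-fetching or PAR2-style repair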

The kind of speed most of us have access to today was unthinkable even 5 years ago; the robustness of the system needs to be improved, from the upload to the storage to the inter-transfers to the download. It's long overdue.

Hypatia
09-18-2011, 11:44 AM
Before even thinking about usenet2 we need to put a stop to copyright mafia activity and the DMCA.
Otherwise it won't be of much help.
Usenet is not like those anonymous P2P networks.. it has massive amounts of data centered in one place (or several places) and it's vulnerable.
Humans must deal with these criminals.. and it's up to our society whether it will be through fear and blood (get your own 9/11, MPAA!) or through legal measures. It depends heavily on the governments and how badly they want to control the internet (the people). Because make no mistake: it's not about money and their so-called "millions" they supposedly lost due to piracy, it's about control and power.

Beck38
09-22-2011, 05:23 AM
Now, I watch these servers with (probably, maybe, certainly?) a bit too much attention, but one thing I've noticed with ALL of them (pretty much) is that the socket errors and such happen with clock-like regularity *at the top of the hour*.

Now, I get to wondering exactly where in the transmission path it's being yanked. My ISP? Some fiber link? Now, if it was that, and the interruptions occurred right at midnight on the path I know my data is taking, it might make sense. Two o'clock in the afternoon? No. And this is VPN/encrypted traffic, so the ISP has no idea whatsoever what it is or where it's going, except that it 'looks' like it's going to a commercial server farm that has nothing whatsoever to do with usenet, P2P, or anything else except business data.
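If you log a timestamp for each socket error, a few lines of Python will confirm the pattern -- bin the errors by minute of the hour and look for the spike at minute 0. The log filename is hypothetical, and this assumes one ISO-8601 timestamp per line:

from collections import Counter
from datetime import datetime

counts = Counter()
with open("socket_errors.log") as log:   # hypothetical error log
    for line in log:
        ts = datetime.fromisoformat(line.strip())
        counts[ts.minute] += 1           # bin by minute of the hour

for minute, n in sorted(counts.items()):
    print(f"minute {minute:02d}: {'#' * n}")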

These folks yanking others' chains are just typical greedy bastards. You can fine the heck out of them, they don't care, that's the 'cost of doing business'. There is only one thing that gets their attention - the SuperMax. Toss a few of them in there, they'll get the hint 'real quick'. We might wish it, but too many judges and prosecutors are 'on the take' for that to happen.

Sooner or later, the only thing that will actually work is exactly what worked in the 1930's.

zot
09-26-2011, 06:42 AM
It would not be hard to improve the NNTP protocol by adding additional checking/verification so that uploads would not get corrupted in transit (and servers would constantly check and back-fill from each other, but that's another story).
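As a rough illustration of the cross-check idea, something like this (hostnames and message-IDs are placeholders, and nntplib shipped with Python 3 only through 3.12) could ask two servers whether they hold the same articles and flag back-fill candidates -- though note that nothing in NNTP itself verifies the bodies match byte for byte:

import nntplib

MESSAGE_IDS = ["<part1@example>", "<part2@example>"]  # placeholders

def present(server: str, msg_id: str) -> bool:
    """True if the server reports it holds the article (STAT command)."""
    try:
        with nntplib.NNTP(server) as conn:
            conn.stat(msg_id)
        return True
    except nntplib.NNTPTemporaryError:   # 430: no such article
        return False

for mid in MESSAGE_IDS:
    a = present("news.provider-a.example", mid)  # placeholder hostnames
    b = present("news.provider-b.example", mid)
    if a != b:
        print(f"{mid}: present on one server only -- back-fill candidate")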

The big question is who will do it? There's really not much of a central authority anymore. Last year Duke University -- where Usenet was born -- finally shut down its NNTP server and related operations, and the academic community had largely abandoned usenet years earlier.


http://news.slashdot.org/story/10/05/18/2342241/Duke-To-Shut-Down-Usenet-Server

Companies like Giganews, Highwinds, Astraweb, etc., could certainly get together and crank out an improved protocol optimized for transferring files (which NNTP was never designed nor intended to do) but I think the biggest obstacle by far is the political/legal considerations. Designing an improved usenet system that makes it easier and more efficient for the public to infringe copyright is virtually guaranteed to land these companies in court -- a situation they'd definitely want to avoid.