Discussion:
ATT VP: Internet to hit capacity by 2010
Scott Francis
2008-04-18 20:15:16 UTC
http://www.news.com/2100-1034_3-6237715.html

I find claims that "soon everything will be HD" somewhat dubious
(working for a company that produces video for online distribution) -
although certainly not as eyebrow-raising as "in 3 years' time, 20
typical households will generate more traffic than the entire Internet
today". Is there some secret plan to put 40Gb ethernet to "typical
households" in the next 3 years that I haven't heard about? I don't
have accurate figures on how much traffic "the entire Internet"
generates, but I'm fairly certain that 5% of it could not be generated
by any single household regardless of equipment installed, torrents
traded or videos downloaded. Even given a liberal application of
Moore's Law, I doubt that would be the case in 2010 either.
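For scale, here is a quick back-of-envelope sketch in Python. The aggregate-traffic figure below is an assumed round number chosen purely for illustration; there was no authoritative published total for Internet traffic at the time.

```python
# Back-of-envelope check of the "20 households > the entire Internet" claim.
# The aggregate figure is an assumption picked purely for illustration;
# real estimates of total Internet traffic circa 2008 varied widely.

ASSUMED_INTERNET_TBPS = 5.0   # assumed aggregate Internet traffic, Tbit/s
HOUSEHOLDS = 20

# Sustained rate each household would have to average for the claim to hold:
per_household_gbps = ASSUMED_INTERNET_TBPS * 1000 / HOUSEHOLDS

print(f"{per_household_gbps:.0f} Gbps sustained per household")
```

Even with a deliberately low assumed aggregate, each of the 20 homes would need hundreds of gigabits per second sustained, far beyond any residential access technology shipping in 2008 or planned for 2010.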

Does anybody know what the basis for Mr. Cicconi's claims were (if
they even had a basis at all)? Internal reports from ATT engineering?
Perusal of industry news sources? IRC? A lot of scary numbers were
tossed into the air without any mention of how they were derived. A
cynical person might be tempted to think it was all a scare tactic to
soften up legislators for the next wave of "reasonable network
management" practices that just happen to have significant revenue
streams attached to them ...
--
darkuncle@{gmail.com,darkuncle.net} || 0x5537F527
http://darkuncle.net/pubkey.asc for public key
Stephen John Smoogen
2008-04-18 20:27:07 UTC
On Fri, Apr 18, 2008 at 2:15 PM, Scott Francis <***@gmail.com> wrote:
> http://www.news.com/2100-1034_3-6237715.html
>
> I find claims that "soon everything will be HD" somewhat dubious
> (working for a company that produces video for online distribution) -

I think that is based on all American TV going to HD, which is
supposed to happen in 2009. (I think I read that currently only 40%
of Americans have HD TVs, and the other 60% were not going to buy one
until it became too late.)

> although certainly not as eyebrow-raising as "in 3 years' time, 20
> typical households will generate more traffic than the entire Internet
> today". Is there some secret plan to put 40Gb ethernet to "typical
> households" in the next 3 years that I haven't heard about? I don't
> have accurate figures on how much traffic "the entire Internet"
> generates, but I'm fairly certain that 5% of it could not be generated
> by any single household regardless of equipment installed, torrents
> traded or videos downloaded. Even given a liberal application of
> Moore's Law, I doubt that would be the case in 2010 either.
>
> Does anybody know what the basis for Mr. Cicconi's claims were (if
> they even had a basis at all)? Internal reports from ATT engineering?
> Perusal of industry news sources? IRC? A lot of scary numbers were

Maybe he has been trading on "the Internet is going to die" since 1981
and his shorts on the Internet are coming due in 2010? I mean, this
sounds much like all the other pump-and-dump pitches I have read :).

> tossed into the air without any mention of how they were derived. A
> cynical person might be tempted to think it was all a scare tactic to
> soften up legislators for the next wave of "reasonable network
> management" practices that just happen to have significant revenue
> streams attached to them ...
> --
> darkuncle@{gmail.com,darkuncle.net} || 0x5537F527
> http://darkuncle.net/pubkey.asc for public key
>
> _______________________________________________
> NANOG mailing list
> ***@nanog.org
> http://mailman.nanog.org/mailman/listinfo/nanog
>



--
Stephen J Smoogen. -- CSIRT/Linux System Administrator
How far that little candle throws his beams! So shines a good deed
in a naughty world. = Shakespeare. "The Merchant of Venice"
David Coulson
2008-04-18 20:45:08 UTC
Stephen John Smoogen wrote:
> I think that is based on all American TV going to HD, which is
> supposed to happen in 2009. (I think I read that currently only 40%
> of Americans have HD TVs, and the other 60% were not going to buy one
> until it became too late.)
This is not accurate. In 2009 the US is terminating analog (NTSC)
transmission of 'over the air' broadcasts. It has nothing to do with
'high definition' broadcasts. OTA broadcasts will just be done using
ATSC, rather than NTSC. It will continue to provide SD programming.

David
Dragos Ruiu
2008-04-18 22:57:41 UTC
On 18-Apr-08, at 1:45 PM, David Coulson wrote:

> Stephen John Smoogen wrote:
>> I think that is based on all American TV going to HD, which is
>> supposed to happen in 2009. (I think I read that currently only 40%
>> of Americans have HD TVs, and the other 60% were not going to buy one
>> until it became too late.)
> This is not accurate. In 2009 the US is terminating analog (NTSC)
> transmission of 'over the air' broadcasts. It has nothing to do with
> 'high definition' broadcasts. OTA broadcasts will just be done using
> ATSC, rather than NTSC. It will continue to provide SD programming.

Bet you a beer it won't happen. :)

Just like the mandated HD broadcasts in top markets by 1997, or else
they'd lose their licenses.

cheers,
--dr
David Coulson
2008-04-19 02:21:27 UTC
Dragos Ruiu wrote:
> Bet you a beer it won't happen. :)
I will let you know next February when my rabbit ears stop working :)
Jeff Shultz
2008-04-18 20:52:08 UTC
Stephen John Smoogen wrote:
> On Fri, Apr 18, 2008 at 2:15 PM, Scott Francis <***@gmail.com> wrote:
>> http://www.news.com/2100-1034_3-6237715.html
>>
>> I find claims that "soon everything will be HD" somewhat dubious
>> (working for a company that produces video for online distribution) -
>
> I think that is based on all American TV going to HD, which is
> supposed to happen in 2009. (I think I read that currently only 40%
> of Americans have HD TVs, and the other 60% were not going to buy one
> until it became too late.)

I'm part of the 60%... since I'm on satellite I believe I don't need to
switch... in fact it would cost me more to get service in HD now if I
did switch.

I suspect there are a lot of me's out there.

--
Jeff Shultz
Bill Nash
2008-04-18 20:32:58 UTC
I wouldn't be shocked at all if this was an element of a multi-pronged
lobbying approach, reminiscent of the 'fiber to the home' tax-break
series that hit a handful of years back and got us pretty much nothing.

Given trivial tech milestones like these:
http://www.thelocal.se/7869/20070712/ (2007)
http://www.lightreading.com/document.asp?doc_id=82315 (2005)

I call bullshit.

Besides, by 2010 we'll be staring down a global economy collapse and
people will be too busy trying to find food to get online and download
movies.

- billn

Marshall Eubanks
2008-04-18 20:38:28 UTC
On Apr 18, 2008, at 4:15 PM, Scott Francis wrote:

> http://www.news.com/2100-1034_3-6237715.html
>
> I find claims that "soon everything will be HD" somewhat dubious
> (working for a company that produces video for online distribution) -
> although certainly not as eyebrow-raising as "in 3 years' time, 20
> typical households will generate more traffic than the entire Internet

Maybe if "typical household" is defined as "close relatives of Peter
Lothberg."

Either that, or he meant 30 instead of 3.

Regards
Marshall

Patrick W. Gilmore
2008-04-18 20:40:19 UTC
On Apr 18, 2008, at 4:15 PM, Scott Francis wrote:

> http://www.news.com/2100-1034_3-6237715.html
>
> I find claims that "soon everything will be HD" somewhat dubious
> (working for a company that produces video for online distribution) -
> although certainly not as eyebrow-raising as "in 3 years' time, 20
> typical households will generate more traffic than the entire Internet
> today". Is there some secret plan to put 40Gb ethernet to "typical
> households" in the next 3 years that I haven't heard about? I don't
> have accurate figures on how much traffic "the entire Internet"
> generates, but I'm fairly certain that 5% of it could not be generated
> by any single household regardless of equipment installed, torrents
> traded or videos downloaded. Even given a liberal application of
> Moore's Law, I doubt that would be the case in 2010 either.

40 Gbps? Does anyone think the Internet has fewer than twenty 40 Gbps
links' worth of traffic? I know individual networks that have more
traffic.

Could we get 100 Gbps to the home by 2010? Hell, we're having trouble
getting 100 Gbps to the CORE by 2010 thanx to companies like Sun
forcing 40 Gbps ethernet down the IEEE's throat.

Not that 100 Gbps would be enough anyway to make his statement true.


> Does anybody know what the basis for Mr. Cicconi's claims were (if
> they even had a basis at all)?

His answers are so far off, they're not even wrong.

Basis? You don't need a basis for such blatantly and objectively
false information that even the most newbie neophyte laughs their ass
off while reading it.

Good thing C|Net asked the "vice president of legislative affairs"
about traffic statistics. Or maybe they didn't ask, but they sure
listened. Perhaps they should ask the Network Architect about the
legislative implications of NN laws. Actually, they would
probably get more useful answers than asking a lawyer about bandwidth.

C|Net--

I'd say the same about at&t, but ....

--
TTFN,
patrick



Williams, Marc
2008-04-18 20:56:22 UTC
If the cable operators put their broadcast content onto an
access-network multicast... then how could they resell the same
content to Europe?



Marc Manthey
2008-04-18 21:45:12 UTC
>
> If the cable operators put their broadcast content onto an
> access-network multicast... then how could they resell the same
> content to Europe?

Hello,

My biggest problem in understanding the IPv6 / multicast concept is
this: if the whole Internet were multicast enabled and there were no
unicast streams, wouldn't that decrease the traffic to a reasonable
amount?
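The intuition behind the question can be sketched with a toy model. All numbers below are made-up assumptions, not measurements: for live content watched simultaneously, multicast replaces per-viewer copies with one copy per link of the distribution tree.

```python
# Toy comparison: unicast vs. multicast delivery of a single live stream.
# Viewer count and bitrate are illustrative assumptions only.

viewers = 1_000_000        # assumed concurrent viewers of one channel
stream_mbps = 8            # assumed per-stream bitrate, Mbit/s

# Unicast: the source (or its CDN) must emit one copy per viewer.
unicast_source_gbps = viewers * stream_mbps / 1000

# Multicast: every link in the distribution tree carries exactly one
# copy, no matter how many viewers sit downstream of it.
multicast_per_link_mbps = stream_mbps

print(f"unicast source load: {unicast_source_gbps:,.0f} Gbps")
print(f"multicast per link:  {multicast_per_link_mbps} Mbps")
```

The catch, of course, is that multicast only collapses *simultaneous* viewing; on-demand and time-shifted traffic stays effectively unicast, so it would not reduce total traffic to anywhere near one stream's worth.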

regards

marc

-
Too often we enjoy the comfort of opinion
without the discomfort of thought.
-- John F. Kennedy, 35th US president

Les enfants teribbles - research and deployment
Marc Manthey - Hildeboldplatz 1a
D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
jabber :***@kgraff.net
blog : http://www.let.de
ipv6 http://stattfernsehen.com/matrix
Mike Lieman
2008-04-18 22:06:48 UTC
On Fri, Apr 18, 2008 at 4:15 PM, Scott Francis <***@gmail.com> wrote:
> http://www.news.com/2100-1034_3-6237715.html
>

It's a FUD attempt to get people to forget about how AT&T owes
everyone in the US with a telephone a check for $150,000.00 in
statutory penalties for their unlawful spying.
Mike Lieman
2008-04-18 22:23:44 UTC
On Fri, Apr 18, 2008 at 6:20 PM, Kevin Oberman <***@es.net> wrote:
> > Date: Fri, 18 Apr 2008 18:06:48 -0400
> > From: "Mike Lieman" <***@gmail.com>
> >
> > On Fri, Apr 18, 2008 at 4:15 PM, Scott Francis <***@gmail.com> wrote:
> > > http://www.news.com/2100-1034_3-6237715.html
> > >
> >
> > It's a FUD attempt to get people to forget about how AT&T owes
> > everyone in the US with a telephone a check for $150,000.00 in
> > statutory penalties for their unlawful spying.

If it's impossible to hold AT&T accountable for violating the Law in
such a blatant, wholesale manner, how could anyone believe that they
could be held accountable to whatever Network Neutrality standards
would be ensconced in Law?
Jeff Shultz
2008-04-18 22:44:18 UTC
Mike Lieman wrote:
> On Fri, Apr 18, 2008 at 6:20 PM, Kevin Oberman <***@es.net> wrote:
>>> Date: Fri, 18 Apr 2008 18:06:48 -0400
>> > From: "Mike Lieman" <***@gmail.com>
>> >
>> > On Fri, Apr 18, 2008 at 4:15 PM, Scott Francis <***@gmail.com> wrote:
>> > > http://www.news.com/2100-1034_3-6237715.html
>> > >
>> >
>> > It's a FUD attempt to get people to forget about how AT&T owes
>> > everyone in the US with a telephone a check for $150,000.00 in
>> > statutory penalties for their unlawful spying.
>
> If it's impossible to hold AT&T accountable for violating the Law in
> such a blatant, wholesale manner, how could anyone believe that they
> could be held accountable to whatever Network Neutrality standards
> would be ensconced in Law?
>

Are we really going to get into politics here? I smell trolls.

--
Jeff Shultz
Alex Pilosov
2008-04-18 22:57:45 UTC
On Fri, 18 Apr 2008, Jeff Shultz wrote:

> Mike Lieman wrote:
> > On Fri, Apr 18, 2008 at 6:20 PM, Kevin Oberman <***@es.net> wrote:
> >>> Date: Fri, 18 Apr 2008 18:06:48 -0400
> >> > From: "Mike Lieman" <***@gmail.com>
> >> >
> >> > On Fri, Apr 18, 2008 at 4:15 PM, Scott Francis <***@gmail.com> wrote:
> >> > > http://www.news.com/2100-1034_3-6237715.html
> >> > >
> >> >
> >> > It's a FUD attempt to get people to forget about how AT&T owes
> >> > everyone in the US with a telephone a check for $150,000.00 in
> >> > statutory penalties for their unlawful spying.
> >
> > If it's impossible to hold AT&T accountable for violating the Law in
> > such a blatant, wholesale manner, how could anyone believe that they
> > could be held accountable to whatever Network Neutrality standards
> > would be ensconced in Law?
> >
>
> Are we really going to get into politics here? I smell trolls.
Yes, this is getting very off-topic very fast. Politics, philosophy, and
legal matters are explicitly forbidden on the list, and this hits all three.

Could y'all knock it off, please?

Please see this for NANOG AUP: http://www.nanog.org/aup.html

Off-topic:

* Whining as in, "so-and-so are terrible lawbreakers and they owe
us".

* Network neutrality (this has been discussed to death here) - unless you
have something poignant to add and you've read in detail what has been
said previously.

* Anything political that does not have operational impact.

* Anything legal that does not have operational impact.

On-topic:

* Operational impact of legal/political/financial external constraints.

-alex
Scott Weeks
2008-04-19 00:25:44 UTC
--- ***@gmail.com wrote:
From: "Scott Francis" <***@gmail.com>

Does anybody know what the basis for Mr. Cicconi's claims were (if
they even had a basis at all)?
----------------------------------------


From: Bill Nash <***@billn.net>

I wouldn't be shocked at all if this was an element of multi-pronged
lobbying approaches...
----------------------------------------



Look at who is saying it and it's quite obvious...


"Jim Cicconi, vice president of legislative affairs for AT&T, warned..."

scott
Sean Donelan
2008-04-19 19:16:19 UTC
On Fri, 18 Apr 2008, Scott Weeks wrote:
> Does anybody know what the basis for Mr. Cicconi's claims were (if
> they even had a basis at all)?

Have there been any second reporting sources, or does anyone have a
YouTube link of Mr. Cicconi's actual statement in context? So far there
seems to be only a single reporter's account, echoed in the bloggerdome.
Tomas L. Byrnes
2008-04-19 19:44:08 UTC
In my experience, ATT (SBC at that time) went over its effective capacity
(over 50% average utilization, and therefore no redundancy) around 2001.

At least for clients I was working with, it was always evident that they
didn't have enough capacity in any node to carry the traffic if they had
a problem on any single upstream link. They also tended to manually
handle routing decisions as opposed to letting the IGP handle it.
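The 50%-utilization observation follows from a simple rule of thumb, sketched below. This is a simplified model that assumes equal-capacity links and perfect rebalancing after a failure:

```python
# With N equal links, a single link failure is survivable only if the
# remaining N-1 links can absorb the full load. Load is expressed in
# units of one link's capacity.

def survives_single_failure(n_links: int, avg_utilization: float) -> bool:
    """True if total traffic still fits after one of n_links fails."""
    total_load = n_links * avg_utilization
    return total_load <= n_links - 1

# With two equal paths, 50% average utilization is exactly the limit:
print(survives_single_failure(2, 0.50))  # True
print(survives_single_failure(2, 0.55))  # False: a failure means congestion
```

More parallel paths raise the safe ceiling (three links survive a failure up to about 66% average utilization), but running a two-path backbone hot above 50% means any single failure congests the survivor.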

Given the nature of the beast, I doubt that has changed much, and the
anecdotal evidence posted here, most recently related to ATT/Cogent
peering, bears that out.

So, maybe from ATT's perspective the Internet (meaning their backbone)
WILL be saturated by 2010.

Since the Internet is a network of independent internets connected to
each other, I'd like to know how Cicconi knows the level of saturation
of everyone else's backbone, or their available dark capacity. I would
think those are closely guarded trade secrets.

It seems what we have here is ATT trying to create a public hue and cry,
so that the taxpayer will be compelled to pay for their required and
overdue network upgrades instead of the company itself; or in order to
get further regulatory relief in the name of investing in their
infrastructure, as was done in the late '90s. Given their, and others',
track records with the subsidies and regulatory relief they were given
in the late '90s (which they used to bankrupt the CLECs, and then passed
the increased revenue on to shareholders rather than investing in
infrastructure), I'd be disinclined to give them what they want.

The US lags the world in broadband not because the FCC and PUCs
hamstring the ILECs, but because for-profit companies with
government-granted monopolies have a disincentive to make much more
than the bare-minimum capital investment needed to keep operating costs
low and competitors out of the market, while maximizing revenue from
existing sunk costs. Would-be competitors, on the other hand, have to
make massive capital investments that require a long recovery period or
high short-term prices, and are easily bankrupted by predatory pricing
from the incumbents.


Paul Wall
2008-04-21 04:49:45 UTC
On Sat, Apr 19, 2008 at 3:44 PM, Tomas L. Byrnes <***@byrneit.net> wrote:
> In my experience, ATT(SBC at that time) hit over its effective capacity
> (over 50% average utilization, and therefore no redundancy) around 2001.

Sounds like you're talking about 7018, not 7132 (SBC), and even 7018
is doing okay for capacity now that its high-traffic customers
(Comcast) are moving traffic elsewhere.

Do you have any specific data to share with the NANOG community
supporting these claims?

> At least for clients I was working with, it was always evident that they
> didn't have enough capacity in any node to carry the traffic if they had
> a problem on any single upstream link. They also tended to manually
> handle routing decisions as opposed to letting the IGP handle it.

Likewise, I'd be interested in implementation specifics of how a
network of AT&T's caliber could implement backbone redundancy and TE
with static routing. Any data you could share would be extremely
helpful.

Paul Wall
Randy Bush
2008-04-21 04:55:03 UTC
Paul Wall wrote:
>> They also tended to manually handle routing decisions as opposed to
>> letting the IGP handle it.
> Likewise, I'd be interested in implementation specifics of how a
> network of AT&T's caliber could implement backbone redundancy and TE
> with static routing.

atm-2, circuitzilla's dream machine.

randy
Ted Fischer
2008-04-20 15:51:44 UTC
All,

Interesting AT&T project ... the IP (and voice) world according
to AT&T, from a New York State of Mind:

http://senseable.mit.edu/nyte/index.html

Ted


WWWhatsup
2008-04-22 05:01:46 UTC
I am pretty sure he is basing it on this:
http://www.internetinnovation.org/tabid/56/articleType/ArticleView/articleId/94/Default.aspx

which itself refers to the Nemertes report, issued last November:
"The Internet Singularity, Delayed: Why Limits in Internet Capacity Will Stifle Innovation on the Web"
http://www.nemertes.com/internet_singularity_delayed_why_limits_internet_capacity_will_stifle_innovation_web
and much discussed at the time - http://www.isoc-ny.org/?p=13 and elsewhere.

Joly MacFie

http://isoc-ny.org/


>From: "Scott Francis" <***@gmail.com>
>
>Does anybody know what the basis for Mr. Cicconi's claims were (if
>they even had a basis at all)?
>----------------------------------------

---------------------------------------------------------------
WWWhatsup NYC
http://pinstand.com - http://punkcast.com
---------------------------------------------------------------
Jorge Amodio
2008-04-19 19:16:38 UTC
I believe you have to take into account from whom and where some of
these assertions are coming.

The article is full of gaffes; to mention just one: "Internet exists,
thanks to the infrastructure provided by a group of mostly private
companies".

AFAIK, most of the telecommunication companies and technology
providers that make up the core infrastructure of the net are
publicly traded companies, including AT&T.

And I concur that even with the dramatic traffic increase due to HD
media, it is hard to believe that "20 typical households will generate
more traffic than the entire Internet today" in three years.

Perhaps he is letting slip what, from a legal point of view, AT&T
thinks about "Net Neutrality", and its take on public/consortium vs.
private traffic policing.

My .02
David Conrad
2008-04-21 00:44:14 UTC
Not to defend AT&T or the statement regarding capacity, but...

On Apr 20, 2008, at 4:16 AM, Jorge Amodio wrote:
> The article is full of gaffes, just to mention one "Internet exists,
> thanks
> to the infrastructure provided by a group of mostly private
> companies".

I suspect this was referencing the difference between "public" as in
governmentally owned/operated (e.g., most of the highway system in the
US) vs. "private" that is non-governmentally owned/operated. The
Internet of today does indeed exist because of private efforts.

Regards,
-drc
Barry Shein
2008-04-22 20:50:18 UTC
On April 21, 2008 at 09:44 ***@virtualized.org (David Conrad) wrote:
>
> I suspect this was referencing the difference between "public" as in
> governmentally owned/operated (e.g., most of the highway system in the
> US) vs. "private" that is non-governmentally owned/operated. The
> Internet of today does indeed exist because of private efforts.

But several of the major players in the net neutrality issue are
beneficiaries of legal monopolies (e.g., just try to go into the
landline voice business in Verizon's territory) and thus regulated for
good reason.

I think once a company accepts a legally enforced monopoly, sometimes
with 100M or more customers, they're not really a private company.

If they want the freedoms of a purely private company then they should
renounce their monopolies.

I wouldn't hold my breath.

I realize others involved on the same side are not legal monopolies,
though even cable TV companies have legally enforced monopolies or
near monopolies on the catv wire plants in many of their customer
regions.

Remove the companies with the legal monopolies from the net neutrality
issue (i.e., demand net neutrality only from the monopoly
beneficiaries) and would this be much of an issue?

Not really.

That's because what you'd be left with is *competition*.

But how can anyone seriously compete with companies who can
cross-subsidize from legally enforced monopolies of 100M customers,
including every single business in their region which is often
delineated in chunks like "all of the northeastern united states" or
thereabouts?

Fair is fair: They shouldn't be able to have it both ways and be able
to cry "legal monopoly!" when someone tries to compete with them and
"private company!" when the monopoly grantors try to reasonably
regulate that monopoly-derived power.

It's an awesome market power they have been granted. We shouldn't let
them use it to control other markets.

--
-Barry Shein

The World | ***@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Login: Nationwide
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
Scott Weeks
2008-04-21 03:42:48 UTC
--- ***@donelan.com wrote:
From: Sean Donelan <***@donelan.com>

On Fri, 18 Apr 2008, Scott Weeks wrote:
> Does anybody know what the basis for Mr. Cicconi's claims were (if
> they even had a basis at all)?

Have there been any second reporting sources, or does anyone have a
YouTube link of Mr. Cicconi's actual statement in context? So far there
seems to be only a single reporter's account, echoed in the bloggerdome.
------------------------------------------


For the record, I didn't say the above. I said this:

---------------------------------
Look at who is saying it and it's quite obvious...
"Jim Cicconi, vice president of legislative affairs for AT&T, warned...
---------------------------------

I looked around for text or video from Mr. Cicconi at the "Westminster eForum" but can't find anything.

www.westminsterforumprojects.co.uk/eforum/default.aspx

scott
Paul Ferguson
2008-04-21 04:33:51 UTC

-- "Scott Weeks" <***@mauigateway.com> wrote:

>I looked around for text or video from Mr. Cicconi at the "Westminster
>eForum" but can't find anything.
>
>www.westminsterforumprojects.co.uk/eforum/default.aspx
>

For what it's worth, I agree with Ryan Paul's summary of the issues
here:

http://arstechnica.com/news.ars/post/20080420-analysis-att-fear-mongering-on-net-capacity-mostly-fud.html

...but take it at face value.

$.02,

- ferg



--
"Fergie", a.k.a. Paul Ferguson
Engineering Architecture for the Internet
fergdawg(at)netzero.net
ferg's tech blog: http://fergdawg.blogspot.com/
Sean Donelan
2008-04-21 16:18:15 UTC
On Mon, 21 Apr 2008, Paul Ferguson wrote:
>> I looked around for text or video from Mr. Cicconi at the "Westminster
>> eForum" but can't find anything.
>>
>> www.westminsterforumprojects.co.uk/eforum/default.aspx
>>
>
> For what it's worth, I agree with Ryan Paul's summary of the issues
> here:

The rest of the story?

http://www.usatoday.com/tech/products/services/2008-04-20-internet-broadband-traffic-jam_N.htm

By 2010, the average household will be using 1.1 terabytes (roughly
equal to 1,000 copies of the Encyclopedia Britannica) of bandwidth a
month, according to an estimate by the Internet Innovation Alliance in
Washington, D.C. At that level, it says, 20 homes would generate more
traffic than the entire Internet did in 1995.

How many folks remember InternetMCI's lack of capacity in the 1990's,
when it actually had to stop installing new Internet connections for
several months because it didn't have any more capacity?
Steve Gibbard
2008-04-21 19:12:10 UTC
Permalink
On Mon, 21 Apr 2008, Sean Donelan wrote:

> The rest of the story?
>
> http://www.usatoday.com/tech/products/services/2008-04-20-internet-broadband-traffic-jam_N.htm
>
> By 2010, the average household will be using 1.1 terabytes (roughly
> equal to 1,000 copies of the Encyclopedia Britannica) of bandwidth a
> month, according to an estimate by the Internet Innovation Alliance in
> Washington, D.C. At that level, it says, 20 homes would generate more
> traffic than the entire Internet did in 1995.
>
> How many folks remember InternetMCI's lack of capacity in the 1990's,
> when it actually had to stop installing new Internet connections for
> several months because it didn't have any more capacity?

I've been on the side arguing that there's going to be enough growth to
cause interesting issues (which is very different than arguing for any
specific remedy that the telcos think will be in their benefit), but the
numbers quoted above strike me as an overstatement.

Let's look at the numbers:

iTunes video, which looks perfectly acceptable on my old NTSC TV, is .75
gigabytes per viewable hour. I think HDTV is somewhere around 8 megabits
per second (if I'm remembering correctly; I may be wrong about that),
which would translate to one megabyte per second, or 3.6 gigabytes per
hour.

For iTunes video, 1.1 terabytes would be 1,100 gigabytes, or 1,100 / .75 =
1,467 hours. 1,467 / 30 = 48.9 hours of video per day. Even assuming we
divide that among three or four people in a household, that's staggering.

For HDTV, 1,100 gigabytes would be 1,100 / 3.6 = 306 hours per month. 306
/ 30 = 10.2 hours per day.
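For reference, the back-of-envelope math above is easy to check in a few lines of Python (a sketch; the 0.75 GB/hour iTunes figure and the 8 Mbps HD guess are the post's own estimates, not measured values):

```python
# Viewing hours per day implied by a monthly transfer volume.
# Rates assumed here: iTunes SD ~0.75 GB/hour; HD guessed at 8 Mbps,
# i.e. 1 MB/s, i.e. 3.6 GB/hour (both figures come from the text above).

def hours_per_day(monthly_gb, gb_per_hour, days=30):
    """Hours of video per day that a monthly volume would buy."""
    return monthly_gb / gb_per_hour / days

MONTHLY_GB = 1100  # the quoted 1.1 TB/month estimate

print(f"iTunes SD: {hours_per_day(MONTHLY_GB, 0.75):.1f} hours/day")  # ~48.9
print(f"8 Mbps HD: {hours_per_day(MONTHLY_GB, 3.6):.1f} hours/day")   # ~10.2
```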

Maybe I just don't spend enough time around the "leave the TV on all day"
demographic. Is that a realistic number? Is there something bigger than
HDTV video that ATT expects people to start downloading?

-Steve
David Coulson
2008-04-21 19:22:32 UTC
Permalink
Steve Gibbard wrote:
> Maybe I just don't spend enough time around the "leave the TV on all day"
> demographic. Is that a realistic number? Is there something bigger than
> HDTV video that ATT expects people to start downloading?
>
I would not be surprised if many households watch more than 10hrs of TV
per day. My trusty old series 2 TiVo often records 5-8hrs of TV per day,
even if I don't watch any of it.

Right now I can get 80 or so channels of basic cable, and who knows how
many channels of Digital Cable/Satellite, for as many TVs as I can fit in
my house without the Internet buckling under the pressure. I assume AT&T
is just saying "We use this pipe for TV and Internet, hence all TV is now
considered Internet traffic"? How many people are REALLY going to be
pulling 10hrs of HD or even SD TV across their Internet connection,
rather than just taking what is multicast from a satellite base
station by their TV service provider? Is there something significant
about AT&T's model (other than the VDSL over twisted pair, rather than
coax/fiber to the prem) that makes them more afraid than Comcast,
Charter or Cox?

Maybe I'm just totally missing something - wouldn't be the first time.
Why would TV of any sort even touch the 'Internet'? And, no, YouTube is
not "TV" as far as I'm concerned.
Williams, Marc
2008-04-21 19:52:51 UTC
Permalink
> Why would TV of any sort even touch the 'Internet'? And, no,
> YouTube is not "TV" as far as I'm concerned.

FWIW:

http://www.worldmulticast.com/marketsummary.html
Chris Adams
2008-04-21 19:43:14 UTC
Permalink
Once upon a time, Steve Gibbard <***@gibbard.org> said:
> iTunes video, which looks perfectly acceptable on my old NTSC TV, is .75
> gigabytes per viewable hour. I think HDTV is somewhere around 8 megabits
> per second (if I'm remembering correctly; I may be wrong about that),
> which would translate to one megabyte per second, or 3.6 gigabytes per
> hour.

You're a little low. ATSC (the over-the-air digital broadcast format)
is 19 megabits per second or 8.55 gigabytes per hour. My TiVo probably
records 12-20 hours per day (I don't watch all that of course), often
using two tuners (so up to 38 megabits per second). That's not all HD
today of course, but the percentage that is HD is going up.

1.1 terabytes of ATSC-level HD would be a little over 4 hours a day. If
you have a family with multiple TVs, that's easy to hit.

That also assumes that we get 40-60 megabit connections (2-3 ATSC format
channels) that can sustain that level of traffic to the household with
widespread deployment in 2 years and that the "average" household hooks
it up to their TVs.
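For what it's worth, the ATSC arithmetic checks out; 19.39 Mbps is the A/53 terrestrial payload rate (the rounded 19 Mbps gives the 8.55 GB/hour above):

```python
# Convert the ATSC payload rate to GB/hour, then see how many hours/day
# the quoted 1.1 TB/month would cover.
ATSC_MBPS = 19.39                          # A/53 terrestrial payload rate

gb_per_hour = ATSC_MBPS / 8 * 3600 / 1000  # Mbit/s -> MB/s -> MB/h -> GB/h
hours_per_day = 1100 / gb_per_hour / 30    # 1.1 TB over a 30-day month

print(f"{gb_per_hour:.2f} GB/hour, {hours_per_day:.1f} hours/day")  # ~8.73, ~4.2
```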

--
Chris Adams <***@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Simon Lockhart
2008-04-21 19:57:39 UTC
Permalink
On Mon Apr 21, 2008 at 02:43:14PM -0500, Chris Adams wrote:
> You're a little low. ATSC (the over-the-air digital broadcast format)
> is 19 megabits per second or 8.55 gigabytes per hour.

I think you're too high there! MPEG2 SD is around 4-6Mbps, MPEG4 SD is around
2-4Mbps, MPEG4 HD is anywhere from 8 to 20Mbps, depending on how much wow
factor the broadcaster is trying to give.

A typical satellite TV multiplex is 20-30Mbps for 4-8 channels, depending
on how much the broadcaster pays for higher bitrate, and thus higher quality.

Simon
--
Simon Lockhart | * Sun Server Colocation * ADSL * Domain Registration *
Director | * Domain & Web Hosting * Internet Consultancy *
Bogons Ltd | * http://www.bogons.net/ * Email: ***@bogons.net *
Chris Adams
2008-04-21 20:12:16 UTC
Permalink
Once upon a time, Simon Lockhart <***@slimey.org> said:
> On Mon Apr 21, 2008 at 02:43:14PM -0500, Chris Adams wrote:
> > You're a little low. ATSC (the over-the-air digital broadcast format)
> > is 19 megabits per second or 8.55 gigabytes per hour.
>
> I think you're too high there! MPEG2 SD is around 4-6Mbps, MPEG4 SD is around
> 2-4Mbps, MPEG4 HD is anywhere from 8 to 20Mbps, depending on how much wow
> factor the broadcaster is trying to give.

Nope, ATSC is 19 (more accurately 19.28) megabits per second. That can
carry multiple sub-channels, or it can be used for a single channel.
Standard definition DVDs can be up to 10 megabits per second. Both only
use MPEG2; MPEG4 can be around half that for similar quality. The base
Blu-Ray data rate is 36 megabits per second (to allow for high quality
MPEG2 at up to 1080p60 resolution).

--
Chris Adams <***@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Ric Messier
2008-04-21 20:25:07 UTC
Permalink
On Mon, 21 Apr 2008, Chris Adams wrote:

>
> Nope, ATSC is 19 (more accurately 19.28) megabits per second. That can
> carry multiple sub-channels, or it can be used for a single channel.
> Standard definition DVDs can be up to 10 megabits per second. Both only
> use MPEG2; MPEG4 can be around half that for similar quality. The base
> Blu-Ray data rate is 36 megabits per second (to allow for high quality
> MPEG2 at up to 1080p60 resolution).
>

From Wikipedia (see: appeal to authority :-):
The different resolutions can operate in progressive scan or interlaced
mode, although the highest 1080-line system cannot display progressive
images at the rate of 59.94 or 60 frames per second. (Such technology was
seen as too advanced at the time, plus the image quality was deemed to be
too poor considering the amount of data that can be transmitted.) A
terrestrial (over-the-air) transmission carries 19.39 megabits of data per
second, compared to a maximum possible bitrate of 10.08 Mbit/s allowed in
the DVD standard.


Ric
Dorn Hetzel
2008-04-21 20:37:38 UTC
Permalink
My directivo records wads of stuff every day, but they are the same bits
that rain down on gazillions of other potential recorders and viewers.
Incremental cost to serve one more household, pretty much zero.

There are definitely narrowcast applications that don't make sense to
broadcast down from a bird, but it also makes no sense at all to claim,
for capacity planning purposes, that every household will need a unicast
IP stream for all of its TV viewing...

-dorn

On Mon, Apr 21, 2008 at 4:25 PM, Ric Messier <***@washere.com> wrote:

> <snip>
m***@bt.com
2008-04-22 10:33:44 UTC
Permalink
> > I think you're too high there! MPEG2 SD is around 4-6Mbps,
> MPEG4 SD is
> > around 2-4Mbps, MPEG4 HD is anywhere from 8 to 20Mbps, depending on
> > how much wow factor the broadcaster is trying to give.
>
> Nope, ATSC is 19 (more accurately 19.28) megabits per second.

So why would anyone plug an ATSC feed directly into the Internet?
Are there any devices that can play it other than a TV set?
Why wouldn't a video services company transcode it to MPEG4 and
transmit that?

I can see that some cable/DSL companies might transmit ATSC to
subscribers, but they would also operate local receivers so that the
traffic never touches their core. Rather like what a cable company does
today with TV receivers in their head ends.

All this talk of exafloods seems to ignore the basic economics of
IP networks. No ISP is going to allow subscribers to pull in 8 gigs
per day of video stream. And no broadcaster is going to pay for the
bandwidth needed to pump out all those ATSC streams. And nobody is
going to stick IP multicast (and multicast peering) in the core just
to deal with video streams to people who leave their TV on all day
whether they are at home or not.

At best you will see IP multicast on a city-wide basis in a single
ISP's network. Also note that IP multicast only works for live
broadcast TV. In today's world there isn't much of that except for
news. Everything else is prerecorded and thus it COULD be transmitted
at any time. IP multicast does not help you when you have 1000
subscribers all pulling in 1000 unique streams. In the 1960's it was
reasonable to think that you could deliver the same video to all
consumers because everybody was the same in one big melting pot. But
that day is long gone.

On the other hand, P2P software could be leveraged to download video
files during off-peak hours on the network. All it takes is some
cooperation between P2P software developers and ISPs so that you have
P2P clients which can be told to lay off during peak hours, or when
they want something from the other side of a congested peering
circuit. Better yet, the ISP's P2P manager could arrange for one full
copy of that file to get across the congested peering circuit during
the time period most favorable for that single circuit, then
distribute elsewhere.
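The "P2P clients which can be told to lay off during peak hours" idea could look something like this hypothetical sketch (the off-peak window and rate caps are invented for illustration; no real client exposes this exact API):

```python
# Hypothetical sketch: a P2P client throttles itself outside an
# ISP-published off-peak window. Window times and caps are assumptions.
from datetime import time, datetime

OFF_PEAK_START = time(1, 0)   # 01:00 local, assumed ISP off-peak window
OFF_PEAK_END = time(7, 0)     # 07:00 local

def in_off_peak(now: datetime) -> bool:
    """True if 'now' falls inside the assumed off-peak window."""
    return OFF_PEAK_START <= now.time() < OFF_PEAK_END

def allowed_rate_kbps(now: datetime, peak_cap=64, offpeak_cap=0) -> int:
    """0 means unlimited; during peak hours, trickle at peak_cap."""
    return offpeak_cap if in_off_peak(now) else peak_cap
```

A real deployment would need the ISP to publish the window and the client to honor it, which is exactly the cooperation being proposed here.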

--Michael Dillon

As far as I am concerned the killer application for IP multicast is
*NOT* video,
it's market data feeds from NYSE, NASDAQ, CBOT, etc.
Dorn Hetzel
2008-04-22 11:25:30 UTC
Permalink
It's certainly not reasonable to assume the same video goes to all
consumers, but on the other hand, there *is* plenty of video that goes to a
*lot* of consumers. I don't really need my own personal unicast copy of the
bits that make up an episode of BSG or whatever. I would hope that the
future has even more TiVo-like devices at the consumer edge that can take
advantage of the right (desired) bits whenever they are available. A single
"box" that can take bits off the bird or cable TV when what it wants is
found there, or request it over IP when it needs to, doesn't seem like
rocket science...

-dorn

On Tue, Apr 22, 2008 at 6:33 AM, <***@bt.com> wrote:

> <snip>
TJ
2008-04-22 12:26:12 UTC
Permalink
"IP multicast does not help you when you have 1000 subscribers all pulling
in 1000 unique streams. In the 1960's it was reasonable to think that you
could deliver the same video to all consumers because everybody was the same
in one big melting pot. But that day is long gone."

... well, multicast could be used - one stream for each of the "500 channels"
or whatever, and the time-shifting could be done on the recipients' side
... just like broadcast TV + DVR today ... as long as we aren't talking
about adding place-shifting (a la SlingBox) as well! The market (or, at least
in the short-to-mid term, the provider :) ) would decide on that.


/TJ


> -----Original Message-----
> From: ***@bt.com [mailto:***@bt.com]
> Sent: Tuesday, April 22, 2008 6:34 AM
> To: ***@nanog.org
> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>
> <snip>
Bruce Curtis
2008-04-22 14:05:58 UTC
Permalink
p2p isn't the only way to deliver content overnight, content could
also be delivered via multicast overnight.

http://www.intercast.com/Eng/Index.asp

http://kazam.com/Eng/About/About.jsp



On Apr 22, 2008, at 5:33 AM, <***@bt.com> wrote:

> <snip>


---
Bruce Curtis ***@ndsu.edu
Certified NetAnalyst II 701-231-8527
North Dakota State University
Marc Manthey
2008-04-22 14:15:19 UTC
Permalink
Am 22.04.2008 um 16:05 schrieb Bruce Curtis:

> p2p isn't the only way to deliver content overnight, content could
> also be delivered via multicast overnight.
>
> http://www.intercast.com/Eng/Index.asp
>
> http://kazam.com/Eng/About/About.jsp


Hmm, sorry, I didn't get it. IMHO multicast is useless for VOD,
correct?


marc
Bruce Curtis
2008-04-22 14:49:53 UTC
Permalink
On Apr 22, 2008, at 9:15 AM, Marc Manthey wrote:

> Am 22.04.2008 um 16:05 schrieb Bruce Curtis:
>
>> p2p isn't the only way to deliver content overnight, content could
>> also be delivered via multicast overnight.
>>
>> http://www.intercast.com/Eng/Index.asp
>>
>> http://kazam.com/Eng/About/About.jsp
>
>
> Hmm, sorry, I didn't get it. IMHO multicast is useless for VOD,
> correct?
>
>
> marc


Michael said the same thing "Also note that IP multicast only works
for live broadcast TV." and then mentioned that p2p could be used to
download content during off-peak hours.

Kazam is a beta test that uses Intercast's technology to download
content overnight to a user's PC via multicast.

My point was p2p isn't the only way to deliver content overnight,
multicast could also be used to do that, and in fact at least one
company is exploring that option.

The example seemed to fit in well with the other examples in the
thread that mentioned TiVo-type devices recording content for
later viewing on demand.

I agree that multicast can be used for live TV; others have
mentioned the multicasting of the BBC, and www.ostn.tv is another
example of live multicasting. However, since TiVo-type devices today
record broadcast content for later viewing on demand, there could
certainly be devices that record multicast content for later viewing
on demand.



---
Bruce Curtis ***@ndsu.edu
Certified NetAnalyst II 701-231-8527
North Dakota State University
Adrian Chadd
2008-04-22 17:13:41 UTC
Permalink
On Tue, Apr 22, 2008, Marc Manthey wrote:

> Hmm, sorry, I didn't get it. IMHO multicast is useless for VOD,
> correct?

As a delivery mechanism to end-users? Sure.

As a way of feeding content to edge boxes which then serve VOD?
Maybe not so useless. But then, it's been years since I toyed with
IP over satellite to feed ${STUFF}.. :)



Adrian
Alex Thurlow
2008-04-21 21:26:22 UTC
Permalink
Chris Adams wrote:
> Once upon a time, Steve Gibbard <***@gibbard.org> said:
>> iTunes video, which looks perfectly acceptable on my old NTSC TV, is .75
>> gigabytes per viewable hour. I think HDTV is somewhere around 8 megabits
>> per second (if I'm remembering correctly; I may be wrong about that),
>> which would translate to one megabyte per second, or 3.6 gigabytes per
>> hour.
>
> You're a little low. ATSC (the over-the-air digital broadcast format)
> is 19 megabits per second or 8.55 gigabytes per hour. My TiVo probably
> records 12-20 hours per day (I don't watch all that of course), often
> using two tuners (so up to 38 megabits per second). That's not all HD
> today of course, but the percentage that is HD is going up.
>
> 1.1 terabytes of ATSC-level HD would be a little over 4 hours a day. If
> you have a family with multiple TVs, that's easy to hit.
>
> That also assumes that we get 40-60 megabit connections (2-3 ATSC format
> channels) that can sustain that level of traffic to the household with
> widespread deployment in 2 years and that the "average" household hooks
> it up to their TVs.
>

I'm going to have to say that that's much higher than we're actually
going to see. You have to remember that there's not a ton of
compression going on in that. We're looking to start pushing HD video
online, and our initial tests show that 1.5Mbps is plenty to push HD
resolutions of video online. We won't necessarily be doing 60 fps or
full quality audio, but "HD" doesn't actually define exactly what it's
going to be.

Look at the HD offerings online today and I think you'll find that
they're mostly 1-1.5 Mbps. TV will stay much higher quality than that,
but if people are watching from their PCs, I think you'll see much more
compression going on, given that the hardware processing it has a lot
more horsepower.


--
Alex Thurlow
Technical Director
Blastro Networks
Frank Bulk - iNAME
2008-04-22 01:35:45 UTC
Permalink
I've found it interesting that those who do Internet TV (re)define HD in a
way that no one would consider HD anymore except the provider. =)

In the news recently has been some complaints about Comcast's HD TV.
Comcast has been (selectively) fitting 3 MPEG-2 HD streams in a 6 MHz
carrier (38 Mbps / 3 = 12.6 Mbps each) and customers aren't happy with that. I'm not
sure how the average consumer will see 1.5 Mbps for HD video as sufficient
unless it's QVGA.

Frank

-----Original Message-----
From: Alex Thurlow [mailto:***@blastro.com]
Sent: Monday, April 21, 2008 4:26 PM
To: ***@nanog.org
Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010

<snip>

I'm going to have to say that that's much higher than we're actually
going to see. You have to remember that there's not a ton of
compression going on in that. We're looking to start pushing HD video
online, and our initial tests show that 1.5Mbps is plenty to push HD
resolutions of video online. We won't necessarily be doing 60 fps or
full quality audio, but "HD" doesn't actually define exactly what it's
going to be.

Look at the HD offerings online today and I think you'll find that
they're mostly 1-1.5 Mbps. TV will stay much higher quality than that,
but if people are watching from their PCs, I think you'll see much more
compression going on, given that the hardware processing it has a lot
more horsepower.


--
Alex Thurlow
Technical Director
Blastro Networks
Marshall Eubanks
2008-04-22 12:10:00 UTC
Permalink
On Apr 21, 2008, at 9:35 PM, Frank Bulk - iNAME wrote:

> I've found it interesting that those who do Internet TV (re)define
> HD in a
> way that no one would consider HD anymore except the provider. =)
>

The FCC did not appear to set a bit rate specification for HD
Television.

The ATSC standard (A-53 part 4) specifies aspect ratios and pixel
formats and frame rates, but not
bit rates.

So AFAICT, no redefinition is necessary. If you are doing (say) 720 x
1280 at 30 fps, you
can call it HD, regardless of your bit rate. If you can find somewhere
where the standard
says otherwise, I would like to know about it.


> In the news recently has been some complaints about Comcast's HD TV.
> Comcast has been (selectively) fitting 3 MPEG-2 HD streams in a 6 MHz
> carrier (38 Mbps / 3 = 12.6 Mbps each) and customers aren't happy with that.
> I'm not
> sure how the average consumer will see 1.5 Mbps for HD video as
> sufficient
> unless it's QVGA.

Well, not with a 15+ year old standard like MPEG-2. (And, of course,
HD is a set of
pixel formats that specifically does not include QVGA.)

I have had video professionals go "wow" at H.264 dual-pass 720p
encodings at 2 Mbps, so it can be done. The real
question is, how often do you see artifacts? And, how much does the
user care? Modern encodings
at these bit rates tend to provide very good encodings of static
scenes. As the on-screen action increases, so
does the likelihood of artifacts, so selection of bit rate depends, I
think, on user expectations and the typical content being shown.
(As an aside, I see lots of artifacts on my at-home cable HD, but I
don't know their bandwidth allocation.)

Regards
Marshall


> <snip>
Paul Ferguson
2008-04-21 17:42:44 UTC
Permalink
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

- -- Sean Donelan <***@donelan.com> wrote:

>The rest of the story?
>
>http://www.usatoday.com/tech/products/services/2008-04-20-internet-broadband-traffic-jam_N.htm
>
> By 2010, the average household will be using 1.1 terabytes (roughly
> equal to 1,000 copies of the Encyclopedia Britannica) of bandwidth a
> month, according to an estimate by the Internet Innovation Alliance in
> Washington, D.C. At that level, it says, 20 homes would generate more
> traffic than the entire Internet did in 1995.


Hmmm. Who exactly is "The Internet Innovation Alliance"?

Unfortunately, their website does not say:

http://www.internetinnovation.org/

But given the content there (generous references to the upcoming
Internet "exaflood" apocalypse), I would guess they are either
composed of telcos and ISPs or telco lobbyists or both. :-)

It would be interesting to know (the rest of the story...)

- - ferg

-----BEGIN PGP SIGNATURE-----
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFIDNIQq1pz9mNUZTMRAq6fAKCCgypsomFy7NmMbLwOjBZMZ1b9fwCfUFuc
kT6BoIXhTsN0ulOvFrWlXNg=
=u65U
-----END PGP SIGNATURE-----

--
"Fergie", a.k.a. Paul Ferguson
Engineering Architecture for the Internet
fergdawg(at)netzero.net
ferg's tech blog: http://fergdawg.blogspot.com/
Sean Donelan
2008-04-21 18:53:41 UTC
Permalink
On Mon, 21 Apr 2008, Paul Ferguson wrote:
> But given the content there (generous references to the upcoming
> Internet "exaflood" apocalypse), I would guess they are either
> comprised of telcos and ISPs or telco lobbyists or both. :-)

Thank goodness anti-virus companies never hype security threats or
fund "Internet safety" organizations :-)


> It would be interesting to know (the rest of the story...)

Everyone agrees having more data would be useful. It would be great
if someone could collect the available data, and get more data from
multiple providers (universities, small, large, for-profit, non-profit,
etc), and publish something.
Paul Ferguson
2008-04-21 18:05:48 UTC
Permalink
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

- -- "Paul Ferguson" <***@netzero.net> wrote:

>Hmmm. Who exactly is "The Internet Innovation Alliance"?
>
>Unfortunately, their website does not say:
[...]

As someone pointed out to me privately, this URL outlines
its membership:

http://www.internetinnovation.org/AboutUs/Members/tabid/59/Default.aspx

Not sure how they found it, since there is no "About Us" link on
the main page. :-)

- - ferg

-----BEGIN PGP SIGNATURE-----
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFIDNdpq1pz9mNUZTMRAvo6AKCIje224+1TOsLCgbXL8mPJ3fRrdgCffnRX
B4Wba6bOm/enwEico/R9LWo=
=NEjI
-----END PGP SIGNATURE-----



--
"Fergie", a.k.a. Paul Ferguson
Engineering Architecture for the Internet
fergdawg(at)netzero.net
ferg's tech blog: http://fergdawg.blogspot.com/
Henry Linneweh
2008-04-21 18:58:46 UTC
Permalink
Internet Alliance
http://www.commoncause.org/site/pp.asp?c=dkLNK1MQIwG&b=1498631
http://www.internetinnovation.org/
http://www.internetinnovation.org/AboutUs/Members/tabid/59/Default.aspx

-Henry

----- Original Message ----
From: Sean Donelan <***@donelan.com>
To: ***@nanog.org
Sent: Monday, April 21, 2008 11:53:41 AM
Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010

On Mon, 21 Apr 2008, Paul Ferguson wrote:
> But given the content there (generous references to the upcoming
> Internet "exaflood" apocalypse), I would guess they are either
> comprised of telcos and ISPs or telco lobbyists or both. :-)

Thank goodness anti-virus companies never hype security threats or
fund "Internet safety" organizations :-)


> It would be interesting to know (the rest of the story...)

Everyone agrees having more data would be useful. It would be great
if someone could collect the available data, and get more data from
multiple providers (universities, small, large, for-profit, non-profit,
etc), and publish something.
Scott Weeks
2008-04-21 19:01:35 UTC
Permalink
------- ***@iecc.com wrote: ------------
The most interesting part is the author bios at the end:

Bruce Mehlman was assistant secretary of commerce under President
Bush. Larry Irving was assistant secretary of commerce under
President Bill Clinton. They are co-chairmen of the Internet
Innovation Alliance, a coalition of individuals, businesses and
nonprofit groups that includes telecommunications companies.
-------------------------------------------------



It also includes AT&T as well as schloads ;-) of companies that sell stuff to them.

scott

-------
Joe Greco
2008-04-21 20:16:33 UTC
Permalink
> Steve Gibbard wrote:
> > Maybe I just don't spend enough time around the "leave the TV on all day"
> > demographic. Is that a realistic number? Is there something bigger than
> > HDTV video that ATT expects people to start downloading?
>
> I would not be surprised if many households watch more than 10hrs of TV
> per day. My trusty old series 2 TiVo often records 5-8hrs of TV per day,
> even if I don't watch any of it.
>
> Right now I can get 80 or so channels of basic cable, and who knows how
> many of Digital Cable/Satellite for as many TVs as I can fit in my house
> without the Internet buckling under the pressure. I assume AT&T is just
> saying "We use this pipe for TV and Internet, hence all TV is now
> considered Internet traffic"? How many people are REALLY going to be
> pulling 10hrs of HD or even SD TV across their Internet connection,
> rather than just taking what is Multicasted from a Satellite base
> station by their TV service provider? Is there something significant
> about AT&T's model (other than the VDSL over twisted pair, rather than
> coax/fiber to the prem) that makes them more afraid than Comcast,
> Charter or Cox?
>
> Maybe I'm just totally missing something - Wouldn't be the first time.
> Why would TV of any sort even touch the 'Internet'. And, no, YouTube is
> not "TV" as far as I'm concerned.

The real problem is that this technology is just in its infancy.

Right now, our TiVo's may pull in many hours a day of TV to watch. In my
case, it's from satellite. In yours, maybe from a cable company. That's
fine, that's manageable, and the technology used to move the signal from
the broad/multicast point to your settop box is only vaguely relevant. It
is not unicast.

There is, however, an opportunity here for a fundamental change in the
distribution model of video, and this should terrify any network operator.
That would be an evolution towards unicast, particularly off-net unicast.

I posted a message on Oct 10 of last year suggesting one potential model
for evolution of video services. We're seeing the market target narrower
segments of the viewing public, and if this continues, we may well see
some "channel" partner with TiVo to provide on-demand access to remote
content over the Internet. That could well lead to a model where you would
have TiVo speculatively preloading content, and potentially vast amounts of
it. Or, worse yet, the popularity of YouTube suggests that at some point,
we may end up with a new "local webserver service" on the next generation
Microsoft Whoopta OS that was capable of publication of video from the
local PC, maybe vaguely similar to BitTorrent under the hood, allowing for
a much higher bandwidth podcast-like service where your TiVo (and everyone
else's) is downloading video slowly from lots of different sources.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Scott Weeks
2008-04-21 20:44:15 UTC
Permalink
--- ***@neustar.biz wrote:
From: "Williams, Marc" <***@neustar.biz>

http://www.worldmulticast.com/marketsummary.html
----------------------------------------------



We should be careful when discussing IPTV traffic issues. Is it inter-AS or intra-AS traffic? I'd imagine the beginning of the IPTV roll-out will be intra-AS traffic, rather than inter-AS and global. We're looking into starting up with it on a small scale to work all the bugs out before expanding the customer base (like AT&T did in San Antonio) but no IPTV traffic will leave our network.

AT&T is rapidly expanding U-verse (http://www.reuters.com/article/internetNews/idUSN2826839220070328 and http://seekingalpha.com/article/30657-project-lightspeed-at-t-s-iptv-architecture) and perhaps they've seen the BW issues better than others, thus the FUD by their vice president of legislative affairs at the Westminster eForum. Perhaps in his PoV AT&T's current network infrastructure is the Internet's current network architecture.

scott

---------------------
Brandon Butterworth
2008-04-22 12:06:22 UTC
Permalink
> So why would anyone plug an ATSC feed directly into the Internet?

Because we can. One day ISPs might do multicast and it might become
cheap enough to deliver to the home. If we don't then they probably
will never bother fixing those two problems

I've been multicasting the BBC's channels in the UK since 2004. The
full-rate streams are mostly used by NOCs with our news on their
projectors; we have lower-rate H.264, WM and Real for people testing
multicast over current ADSL. The aim is by 2012 to be able to do all our
Olympics sports in HD
(a channel per simultaneous event rather than the usual just one with
highlights of each) something we can't do on DTT (= ATSC) due to lack
of spectrum (there's enough but it's being sold for non TV use after
analogue switch off)

> Are there any devices that can play it other than a TV set?

Sure, STB for TV and VLC etc for most OS. It's trivial

> No ISP is going to allow subscribers to pull in 8gigs
> per day of video stream. And no broadcaster is going to pay for the
> bandwidth needed to pump out all those ATSC streams.

That's because they don't have a viable business model (unlimited
use...). Cable companies are moving to IP; they already carry
it from their core to the home, just the transport is changing.

> And nobody is
> going to stick IP multicast (and multicast peering) in the core just
> to deal with video streams to people who leave their TV on all day
> whether they are at home or not.

When people do it unicast regardless, then not doing multicast is silly

> At best you will see IP multicast on a city-wide basis in a single
> ISP's network.

Unlikely, too much infrastructure and not all content is available
locally

> Also note that IP multicast only works for live broadcast TV.

See Sky Movies for a simulation of multicast VoD

> IP multicast
> does not help you when you have 1000 subscribers all pulling in 1000
> unique streams.

True but the 10000000 watching BBC1 may as well be multicast, at
least you save a bit.
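"Save a bit" is an understatement; the rough sketch below quantifies it (the per-viewer stream rate is my assumption for illustration, not a BBC figure):

```python
# Aggregate unicast load vs. one multicast stream for a popular channel
viewers = 10_000_000      # "the 10000000 watching BBC1"
stream_mbps = 1.5         # assumed per-viewer stream rate (illustrative)
unicast_gbps = viewers * stream_mbps / 1000
print(f"unicast: {unicast_gbps:,.0f} Gbps aggregate")   # 15,000 Gbps
print(f"multicast: {stream_mbps} Mbps per link, regardless of viewers")
```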

> In the 1960's it was reasonable to think that you could deliver the
> same video to all consumers because everybody was the same in one big
> melting pot. But that day is long gone.

The evidence is that a lot of people still like to vegetate in front of
a TV rather than hunt for their content. Once they're all dead we'll
find out if linear TV is still viable; by then the IPv6 rollout may
have completed too.

> On the other hand, P2P software could be leveraged to download video
> files during off-peak hours on the network.

Sure but P2P isn't a requirement for that and currently saves you no
money (UK ADSL wholesale model) over unicast. If people are taking
random content you won't be able to predict and send it in advance. If
you can predict then you can multicast it and save some transport cost
vs P2P/unicast

> Better yet, the ISP's P2P manager could arrange
> for one full copy of that file to get across the congested peering
> circuit during
> the time period most favorable for that single circuit, then distribute
> elsewhere.

Or they could just run an http cache and save a lot more traffic
and not have to rely on P2P apps playing nicely.

Apologies for length, just "no" seemed too rude

brandon
Joe Greco
2008-04-22 13:02:06 UTC
Permalink
> All this talk of exafloods seems to ignore the basic economics of
> IP networks. No ISP is going to allow subscribers to pull in 8gigs
> per day of video stream. And no broadcaster is going to pay for the
> bandwidth needed to pump out all those ATSC streams. And nobody is
> going to stick IP multicast (and multicast peering) in the core just
> to deal with video streams to people who leave their TV on all day
> whether they are at home or not.

The floor is littered with the discarded husks of policies about what ISP's
are going to allow or disallow. "No servers", "no connection sharing",
"web browsing only," "no voip," etc. These typically last only as long as
the errant assumptions upon which they're based remain somewhat viable.
For example, when NAT gateways and Internet Connection Sharing became
widely available, trying to prohibit connection sharing went by the wayside.

8GB/day is less than a single megabit per second, and with ISP's selling
ultra high speed connections (we're now able to get 7 or 15Mbps), an ISP
might find it difficult to defend why they're selling a premium 15Mbps
service on which a user can't get 1/15th of that.
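The arithmetic checks out (decimal gigabytes assumed):

```python
# Average rate implied by 8 GB/day of downloaded video
bytes_per_day = 8e9            # 8 GB, decimal
seconds_per_day = 86_400
mbps = bytes_per_day * 8 / seconds_per_day / 1e6
print(f"{mbps:.2f} Mbps")      # about 0.74 Mbps, under 1 Mbps
print(f"{mbps / 15:.1%} of a 15 Mbps tier")  # about 5%
```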

> At best you will see IP multicast on a city-wide basis in a single
> ISP's network. Also note that IP multicast only works for live broadcast
> TV. In today's world there isn't much of that except for news.

Huh? Why does IP multicast only work for that?

> Everything else is prerecorded and thus it COULD be transmitted at
> any time. IP multicast does not help you when you have 1000 subscribers
> all pulling in 1000 unique streams.

Yes, that's potentially a problem. That doesn't mean that multicast can
not be leveraged to handle prerecorded material, but it does suggest that
you could really use a TiVo-like device to make best use. A fundamental
change away from "live broadcast" and streaming out a show in 1:1 realtime,
to a model where everything is spooled onto the local TiVo, and then
watched at a user's convenience.

We don't have the capacity at the moment to really deal with 1000 subs all
pulling in 1000 unique streams, but the likelihood is that we're not going
to see that for some time - if ever.

What seems more likely is that we'll see an evolution of more specialized
offerings, possibly supplementing or even eventually replacing the tiered
channel package offerings of your typical cable company, since it's pretty
clear that a-la-carte channel selection isn't likely to happen soon.

That may allow some "less popular" channels to come into being. I happen
to like holding up SciFi as an example, because their current operations
are significantly different than originally conceived, and they're now
producing significant quantities of their own original material. It's
possible that we could see a much larger number of these sorts of ventures
(which would terrify legacy television networks even further).

The biggest challenge that I would expect from a network point of view is
the potential for vast amounts of decentralization. For example, there's
low-key stuff such as the "Star Trek: Hidden Frontier" series of fanfic-
based video projects. There are almost certainly enough fans out there
that you'd see a small surge in viewership if the material was more
readily accessible (read that as: automatically downloaded to your TiVo).
That could encourage others to do the same in more quantity. These are
all low-volume data sources, and yet taken as a whole, they could
represent a fairly difficult problem were everyone to be doing it. It is
not just tech geeks that are going to be able produce video, as the stuff
becomes more accessible (see: YouTube), we may see stuff like mini soap
operas, home & garden shows, local sporting events, local politics, etc.

I'm envisioning a scenario where we may find that there are a few tens of
thousands of PTA meetings each being uploaded routinely onto the home PC's
of whoever recorded the local meeting, and then made available to the
small number of interested parties who might then watch, where (0<N<20).

If that kind of thing happens, then we're going to find that there's a
large range of projects that have potential viewership landing anywhere
between this example and that of the specialty broadcast cable channels,
and the question that is relevant to network operators is whether there's
a way to guide this sort of thing towards models which are less harmful
to the network. I don't pretend to have the answers to this, but I do
feel reasonably certain that the success of YouTube is not a fluke, and
that we're going to see more, not less, of this sort of thing.

> As far as I am concerned the killer application for IP multicast is
> *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.

You can go compare the relative successes of Yahoo! Finance and YouTube.

While it might be nice to multicast that sort of data, it's a relative
trickle of data, and I'll bet that the majority of users have not only
not visited a market data site this week, but have actually never done
so.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Alexander Harrowell
2008-04-22 13:27:22 UTC
Permalink
On Tue, Apr 22, 2008 at 2:02 PM, Joe Greco <***@ns.sol.net> wrote:

>
> > As far as I am concerned the killer application for IP multicast is
> > *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.
>
> You can go compare the relative successes of Yahoo! Finance and YouTube.
>
> While it might be nice to multicast that sort of data, it's a relative
> trickle of data, and I'll bet that the majority of users have not only
> not visited a market data site this week, but have actually never done
> so.


As if most financial (and other mega-dataset) data was on consumer Web
sites. Think pricing feeds off stock exchange back-office systems.
m***@bt.com
2008-04-22 14:41:27 UTC
Permalink
> > IP multicast does not help you when you have 1000 subscribers
> > all pulling in 1000 unique streams.
>
> Yes, that's potentially a problem. That doesn't mean that
> multicast can not be leveraged to handle prerecorded
> material, but it does suggest that you could really use a
> TiVo-like device to make best use.

You mean a computer? Like the one that runs file-sharing
clients? Or Squid? Or an NNTP server?

Is video so different from other content? Considering the
volume of video that currently traverses P2P networks I really
don't see that there is any need for an IP multicast solution
except for news feeds and video conferencing.

> What seems more likely is that we'll see an evolution of more
> specialized offerings,

Yes. The overall trend has been to increasingly split the market
into smaller slivers with additional choices being added and older
ones still available. During the shift to digital broadcasting in
the UK, we retained the free-to-air services with more channels
than we had on analog. Satellite continued to grow in diversity and
now there is even a Freesat service coming online. Cable TV is still
there although now it is usually bundled with broadband Internet as
well as telephone service. You can access the Internet over your mobile
phone using GPRS, or 3G and wifi is spreading slowly but surely.

But one thing that does not change is the number of hours in the day.
Every service competes for scarce attention spans, and a more-or-less
fixed portion of people's disposable income. Based on this, I don't
expect to see any really huge changes.

> That may allow some "less popular" channels to come into
> being.

YouTube et al.

> I happen to like holding up SciFi as an example,
> because their current operations are significantly different
> than originally conceived, and they're now producing
> significant quantities of their own original material.

The cost to film and to edit video content has dropped
dramatically over the past decade. The SciFi channel is the
tip of a very big iceberg. And what about immigrants? Even
50 years ago, immigrants to the USA joined a bigger melting
pot culture and integrated slowly but surely. Nowadays,
they have cheap phonecalls back home, the same Internet content
as the folks back home, and P2P to get the TV shows and movies
that people are watching back home. How is any US channel-based
system going to handle that diversity and variety?

> There are almost certainly enough fans out
> there that you'd see a small surge in viewership if the
> material was more readily accessible (read that as:
> automatically downloaded to your TiVo).

Is that so different from P2P video? In any case, the Tivo model
is limited to the small amount of content, all commercial, that
they can classify so that Tivo downloads the right stuff. P2P
allows you to do the classification, but it is still automatically
downloaded while you sleep.

> I'm envisioning a scenario where we may find that there are a
> few tens of thousands of PTA meetings each being uploaded
> routinely onto the home PC's of whoever recorded the local
> meeting, and then made available to the small number of
> interested parties who might then watch, where (0<N<20).

Any reason why YouTube can't do this today? Remember the human
element. People don't necessarily study the field of possibilities
and then make the optimal choice. Usually, they just pick what is
familiar as long as it is good enough. Click onto a YouTube video,
then click the pause button, then go cook supper. After you eat,
go back and press the play button. To the end user, this is much
the same experience as P2P, or programming a PVR to record an
interesting program that broadcasts at an awkward time.

> I don't pretend to have the answers
> to this, but I do feel reasonably certain that the success of
> YouTube is not a fluke, and that we're going to see more, not
> less, of this sort of thing.

Agreed.

> > As far as I am concerned the killer application for IP multicast is
> > *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.
>
> You can go compare the relative successes of Yahoo! Finance
> and YouTube.

Actually, Yahoo! Finance is only one single subscriber to these market
data feeds. My company happens to run an IP network supporting global
multicast, which delivers the above market data feeds, and many others,
to over 10,000 customers in over 50 countries. Market data feeds are not
a mass market consumer product but they are a realtime firehose of data
that people want to receive right now and not a microsecond later. It is
not unusual for our sales team to receive RFPs that specify latency
times
that are faster than the speed of light. The point is that IP multicast
is probably the only way to deliver this data because we cannot afford
the
additional latency to send packets into a server and back again. I.e. a
CDN
type of solution won't work.

It's not only nice to multicast this data, it is mission critical.
People are risking millions of dollars every hour based on the data in
these feeds. The way it usually works (pioneered by NYSE, I believe) is
that they send two copies of every packet through two separate multicast
trees. If there is too much time differential between the arrival of the
two packets, the service puts up a warning flag so that the traders know
their data is stale. Add a few more milliseconds and it shuts down
entirely, because the data is now entirely useless. When latency is this
important, those copies going to multiple subscribers have to be made in
the packet-forwarding device, i.e. a router supporting IP multicast.
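The receiving side of that dual-tree scheme might look something like this (a minimal sketch; the class name and thresholds are illustrative, not any exchange's actual protocol):

```python
# Illustrative A/B multicast feed arbitration: two copies of each packet
# arrive on separate multicast trees; if their arrival times diverge too
# far, flag the data stale, and past a harder limit stop trading on it.
# Thresholds are invented for illustration.

STALE_MS = 5.0      # warn traders that data may be stale (hypothetical)
SHUTDOWN_MS = 20.0  # data is useless: stop consuming (hypothetical)

class FeedArbiter:
    def __init__(self):
        self.pending = {}  # seqno -> arrival time of first copy (ms)

    def on_packet(self, seqno, arrival_ms):
        """Return 'ok', 'stale', or 'shutdown' once both copies arrive."""
        if seqno not in self.pending:
            self.pending[seqno] = arrival_ms
            return None  # still waiting for the copy from the other tree
        skew = abs(arrival_ms - self.pending.pop(seqno))
        if skew > SHUTDOWN_MS:
            return "shutdown"
        if skew > STALE_MS:
            return "stale"
        return "ok"

arb = FeedArbiter()
arb.on_packet(1, 100.0)         # first copy, tree A
print(arb.on_packet(1, 102.0))  # tree B, 2 ms skew -> ok
arb.on_packet(2, 200.0)
print(arb.on_packet(2, 210.0))  # 10 ms skew -> stale
```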

Of course, consumer video doesn't have the same strict latency
requirements; hence my opinion that IP multicast there is unneeded
complexity. Use the best tool for the job.

--Michael Dillon
Joe Greco
2008-04-22 13:44:49 UTC
Permalink
> On Tue, Apr 22, 2008 at 2:02 PM, Joe Greco <***@ns.sol.net> wrote:
> > > As far as I am concerned the killer application for IP multicast is
> > > *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.
> >
> > You can go compare the relative successes of Yahoo! Finance and YouTube.
> >
> > While it might be nice to multicast that sort of data, it's a relative
> > trickle of data, and I'll bet that the majority of users have not only
> > not visited a market data site this week, but have actually never done
> > so.
>
> As if most financial (and other mega-dataset) data was on consumer Web
> sites. Think pricing feeds off stock exchange back-office systems.

Oh, you got my point. Good. :-)

This isn't a killer application for IP multicast, at least not on the
public Internet. High volume bits that are not busily traversing a hundred
thousand last-mile residential connections are probably not the bits that
are going to pose a serious challenge for network operators, or at least,
that's my take on things.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Joe Greco
2008-04-22 16:47:02 UTC
Permalink
> > > IP multicast does not help you when you have 1000 subscribers
> > > all pulling in 1000 unique streams.
> >
> > Yes, that's potentially a problem. That doesn't mean that
> > multicast can not be leveraged to handle prerecorded
> > material, but it does suggest that you could really use a
> > TiVo-like device to make best use.
>
> You mean a computer? Like the one that runs file-sharing
> clients?

Like the one that nobody really wants to watch large quantities of
television on? Especially now that it's pretty common to have large,
flat screen TV's, and watching TV even on a 24" monitor feels like a
throwback to the '80s?

How about the one that's shaped like a TiVo and has a built-in remote
control, sane operating software, can be readily purchased and set up
by a non-techie, and is known to work well?

I remember all the fuss about how people would be making phone calls
using VoIP and their computers. Yet most of the time, I see VoIP
consumers transforming VoIP to legacy POTS, or VoIP hardphones, or
stuff like that. I'm going to make a guess and take a stab and say
that people are going to prefer to keep their TV's somewhat more TV-
like.

> Or Squid? Or an NNTP server?

Speaking as someone who's run the largest Squid and news server
deployments in this region, I think I can safely say - no.

It's certainly fine to note that both Squid and NNTP have elements that
deal with transferring large amounts of data, and that fundamentally
similar elements could play a role in the distribution model, but I see
no serious role for those at the set-top level.

> Is video so different from other content? Considering the
> volume of video that currently traverses P2P networks I really
> don't see that there is any need for an IP multicast solution
> except for news feeds and video conferencing.

Wow. Okay. I'll just say, then, that such a position seems a bit naive,
and I suspect that broadband networks are going to be crying about the
sheer stresses on their networks, when moderate numbers of people begin
to upload videos into their TiVo, which then share them with other TiVo's
owned by their friends around town, or across an ocean, while also
downloading a variety of shows from a dozen off-net sources, etc.

I really see the any-to-any situation as being somewhat hard on networks,
but if you believe that not to be the case, um, I'm listening, I guess.

> > What seems more likely is that we'll see an evolution of more
> > specialized offerings,
>
> Yes. The overall trend has been to increasingly split the market
> into smaller slivers with additional choices being added and older
> ones still available.

Yes, but that's still a broadcast model. We're talking about an evolution
(potentially _r_evolution) of technology where the broadcast model itself
is altered.

> During the shift to digital broadcasting in
> the UK, we retained the free-to-air services with more channels
> than we had on analog. Satellite continued to grow in diversity and
> now there is even a Freesat service coming online. Cable TV is still
> there although now it is usually bundled with broadband Internet as
> well as telephone service. You can access the Internet over your mobile
> phone using GPRS, or 3G and wifi is spreading slowly but surely.

Yes.

> But one thing that does not change is the number of hours in the day.
> Every service competes for scarce attention spans,

Yes. However, some things that do change:

1) Broadband speeds continue to increase, making it possible for more
content to be transferred

2) Hard drives continue to grow, and the ability to store more, combined
with higher bit rates (HD, less artifact, whatever) means that more
bits can be transferred to fill the same amount of time

3) Devices such as TiVo are capable of downloading large amounts of material
on a speculative basis, even on days where #hrs-tv-watched == 0. I
suspect that this effect may be a bit worse as more diversity appears,
because instead of hitting stop during a 30-second YouTube clip, you're
now hitting delete 15 seconds into a 30-minute InterneTiVo'd show. I
bet I can clear out a few hours worth of not-that-great programming in
5 minutes...

> and a more-or-less
> fixed portion of people's disposable income. Based on this, I don't
> expect to see any really huge changes.

That's fair enough. That's optimistic (from a network operator's point
of view.) I'm afraid that such changes will happen, however.

> > That may allow some "less popular" channels to come into
> > being.
>
> YouTube et al.

The problem with that is that there's money to be had, and if you let
YouTube host your video, it's YouTube getting the juicy ad money. An
essential quality of the Internet is the ability to eliminate the
middleman, so even if YouTube has invented itself as a new middleman,
that's primarily because it is kind of a new thing, and we do not yet
have ways for the average user to easily serve video clips a different
way. That will almost certainly change.

> > I happen to like holding up SciFi as an example,
> > because their current operations are significantly different
> > than originally conceived, and they're now producing
> > significant quantities of their own original material.
>
> The cost to film and to edit video content has dropped
> dramatically over the past decade. The SciFi channel is the
> tip of a very big iceberg. And what about immigrants? Even
> 50 years ago, immigrants to the USA joined a bigger melting
> pot culture and integrated slowly but surely. Nowadays,
> they have cheap phonecalls back home, the same Internet content
> as the folks back home, and P2P to get the TV shows and movies
> that people are watching back home. How is any US channel-based
> system going to handle that diversity and variety?

Well, that's the point I'm making. It isn't, and we're going to see
SOMEONE look at this wonderful Internet thingy and see in it a way to
"solve" this problem, which is going to turn into an operational
nightmare as traffic loads increase, and a larger percentage of users
start to either try to use the bandwidth they're being "sold," or
actually demand it.

> > There are almost certainly enough fans out
> > there that you'd see a small surge in viewership if the
> > material was more readily accessible (read that as:
> > automatically downloaded to your TiVo).
>
> Is that so different from P2P video? In any case, the Tivo model
> is limited to the small amount of content, all commercial, that
> they can classify so that Tivo downloads the right stuff. P2P
> allows you to do the classification, but it is still automatically
> downloaded while you sleep.

I guess I'm saying that I would not expect this to remain this way
indefinitely. To be clear, I don't necessarily mean the current
TiVo device or company, I'm referring to a TiVo-like device that is
your personal video assistant. I'd like to think that the folks over
at TiVo be the one to leverage this sort of thing, but that's about
it. This could come from anywhere. Slingbox comes to mind as one
possibility.

> > I'm envisioning a scenario where we may find that there are a
> > few tens of thousands of PTA meetings each being uploaded
> > routinely onto the home PC's of whoever recorded the local
> > meeting, and then made available to the small number of
> > interested parties who might then watch, where (0<N<20).
>
> Any reason why YouTube can't do this today?

Primarily because I'm looking towards the future, and there are many
situations where YouTube isn't going to be the answer.

For example, consider the PTA meeting: I'm not sure if YouTube is going
to want to be dealing with maybe 10,000 videos that are each an hour or
two long which are watched by maybe a handful of people, at however
frequently your local PTA meetings get held. Because there's a lot of
PTA's. And the meetings can be long. Further, it's a perfect situation
where you're likely to be able to keep a portion of the traffic on-net
through geolocality effects.

Of course, I'm assuming some technology exists, possibly in the upcoming
fictional Microsoft Whoopta OS, that makes local publication and serving
of video easy to do. If there's a demand, we will probably see it.

> Remember the human
> element. People don't necessarily study the field of possibilities
> and then make the optimal choice.

That's the argument to discuss this now rather than later.

> Usually, they just pick what is
> familiar as long as it is good enough. Click onto a YouTube video,
> then click the pause button, then go cook supper. After you eat,
> go back and press the play button. To the end user, this is much
> the same experience as P2P, or programming a PVR to record an
> interesting program that broadcasts at an awkward time.

I would say that it is very much NOT the same experience as programming a
PVR. I watch exceedingly little video on the computer, for example. I
simply prefer the TV. And if more than one person's going to watch, it
*has* to be on the TV (at least here).

> > I don't pretend to have the answers
> > to this, but I do feel reasonably certain that the success of
> > YouTube is not a fluke, and that we're going to see more, not
> > less, of this sort of thing.
>
> Agreed.
>
> > > As far as I am concerned the killer application for IP multicast is
> > > *NOT* video, it's market data feeds from NYSE, NASDAQ, CBOT, etc.
> >
> > You can go compare the relative successes of Yahoo! Finance
> > and YouTube.
>
> Actually, Yahoo! Finance is only one single subscriber to these market
> data feeds. My company happens to run an IP network supporting global
> multicast, which delivers the above market data feeds, and many others,
> to over 10,000 customers in over 50 countries. Market data feeds are not
> a mass market consumer product but they are a realtime firehose of data
> that people want to receive right now and not a microsecond later. It is
> not unusual for our sales team to receive RFPs that specify latency
> times that are faster than the speed of light. The point is that IP
> multicast is probably the only way to deliver this data because we
> cannot afford the additional latency to send packets into a server and
> back again. I.e. a CDN type of solution won't work.
>
> It's not only nice to multicast this data, it is mission critical.
> People are risking millions of dollars every hour based on the data in
> these feeds. The way it usually works (pioneered by NYSE I believe) is
> that they send two copies of every packet through two separate
> multicast trees. If there is too much time differential between the
> arrival of the two packets then the service puts up a warning flag so
> that the traders know their data is stale. Add a few more milliseconds
> and it shuts down entirely because the data is now entirely useless.
> When latency is this important, those copies going to multiple
> subscribers have to be copied in the packet-forwarding device, i.e. a
> router supporting IP multicast.
>
> Of course consumer video doesn't have the same strict latency
> requirements, therefore my opinion that IP multicast is unneeded
> complexity. Use the best tool for the job.

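A minimal sketch of the A/B dual-feed arbitration described above; the
thresholds and data shapes here are invented for illustration (real
exchanges publish their own tolerances):

```python
# Dual-multicast-feed arbitration: the same sequence-numbered packet
# arrives on two feeds (A and B). If the worst observed arrival-time
# skew exceeds a warning threshold, flag the data as stale; past a
# cutoff, stop trusting the feed entirely. Thresholds are hypothetical.

WARN_MS = 5.0     # hypothetical staleness warning threshold
CUTOFF_MS = 20.0  # hypothetical shutdown threshold

def feed_status(arrivals_a, arrivals_b):
    """arrivals_a/arrivals_b map sequence number -> arrival time (ms).
    Returns 'ok', 'stale', or 'down' based on the worst A/B skew."""
    worst = 0.0
    for seq in arrivals_a.keys() & arrivals_b.keys():
        worst = max(worst, abs(arrivals_a[seq] - arrivals_b[seq]))
    if worst >= CUTOFF_MS:
        return "down"
    if worst >= WARN_MS:
        return "stale"
    return "ok"
```

(A real consumer would also deduplicate, taking whichever copy of each
packet arrives first; that is the point of sending two.)
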
There are lots of things that multicast can be used for, and there's no
question that financial data could be useful that way. However, what I'm
saying is that this isn't particularly relevant on the public Internet in
a general way. The thing that's going to kill networks isn't the presence
or absence of the data you're talking about, because as a rule anybody who
needs data in the sort of fashion you're talking about is capable of buying
sufficient guaranteed network capacity to deal with it.

I could just as easily say that the killer application for IP multicast is
routing protocols such as OSPF, because that's probably just as relevant
(in a different way) as what you're talking about. But both are
distractions.

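That aside is literal, by the way: OSPF really does ride on link-local
IP multicast. A quick stdlib check of the well-known group addresses:

```python
# OSPF Hellos go to 224.0.0.5 (AllSPFRouters) and DR-directed traffic
# to 224.0.0.6 (AllDRouters), both inside the link-local block
# 224.0.0.0/24 that routers never forward off the local link.
import ipaddress

ALL_SPF_ROUTERS = ipaddress.ip_address("224.0.0.5")
ALL_D_ROUTERS = ipaddress.ip_address("224.0.0.6")
LINK_LOCAL_BLOCK = ipaddress.ip_network("224.0.0.0/24")

for addr in (ALL_SPF_ROUTERS, ALL_D_ROUTERS):
    assert addr.is_multicast
    assert addr in LINK_LOCAL_BLOCK
```
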
What I'm concerned about are things that are going to cause major networks
to have difficulties. Given this discussion, this almost certainly
requires you to involve circuits where oversubscription is a key component
in the product strategy. That probably means residential broadband
connections, which are responsible for a huge share of the global Internet's
traffic. My uninformed guess would be that there are more of those
broadband connections than there are attachments to your global
multicast network. Maybe even by an order of magnitude. :-)

Multicast may or may not be the solution to the problem at hand, but from
a distribution point of view, multicast and intelligent caching share some
qualities that are desirable. To write off multicast as being at least a
potential part of the solution, just because the application is less
critical than your financial transactions, may be premature.

I see a lot of value in having content only arrive on-net once, and
multicast could be a way to help that happen.

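The back-of-the-envelope arithmetic behind "arriving on-net once" looks
like this (the figures are invented for illustration):

```python
# Upstream transit load for delivering one live stream to N on-net
# viewers: naive unicast hauls N copies across the transit link, while
# multicast (or a local cache hit) hauls one. Numbers are illustrative.

def transit_load_mbps(stream_mbps, viewers, arrives_once=False):
    """Mbps crossing the transit link for a single live stream."""
    return stream_mbps * (1 if arrives_once else viewers)

# E.g. a 1.5 Mbps SD stream to 10,000 on-net viewers:
naive_unicast = transit_load_mbps(1.5, 10_000)                   # 15,000 Mbps
on_net_once = transit_load_mbps(1.5, 10_000, arrives_once=True)  # 1.5 Mbps
```
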
The real problem is that neither your financial transactions nor any
meaningful amount of video is able to transit multicast across random
parts of the public Internet, which is a bit of a sticking point.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Joe Abley
2008-04-22 17:45:19 UTC
Permalink
On 22 Apr 2008, at 12:47, Joe Greco wrote:

>> You mean a computer? Like the one that runs file-sharing
>> clients?
>
> Like the one that nobody really wants to watch large quantities of
> television on?

Perhaps more like the mac mini that's plugged into the big plasma
screen in the living room? Or one of the many stereo-component-styled
"media" PCs sold for the same purpose, perhaps even running Windows
MCE, a commercial operating system sold precisely because people want
to hook their computers up to televisions?

Or the old-school hacked XBox running XBMC, pulling video over SMB
from the PC in the other room?

Or the XBox 360 which can play media from the home-user NAS in the
back room? The one with the bittorrent client on it? :-)


Joe
Brandon Galbraith
2008-04-22 17:51:27 UTC
Permalink
On 4/22/08, Joe Abley <***@ca.afilias.info> wrote:
>
>
> On 22 Apr 2008, at 12:47, Joe Greco wrote:
>
> >> You mean a computer? Like the one that runs file-sharing
> >> clients?
> >
> > Like the one that nobody really wants to watch large quantities of
> > television on?
>
>
> Perhaps more like the mac mini that's plugged into the big plasma
> screen in the living room? Or one of the many stereo-component-styled
> "media" PCs sold for the same purpose, perhaps even running Windows
> MCE, a commercial operating system sold precisely because people want
> to hook their computers up to televisions?
>
> Or the old-school hacked XBox running XBMC, pulling video over SMB
> from the PC in the other room?
>
> Or the XBox 360 which can play media from the home-user NAS in the
> back room? The one with the bittorrent client on it? :-)


Don't forget the laptop or thin desktop hooked up to the 24-60 inch monitor
in the bedroom/living room to watch Netflix Watch It Now content (on which
there is no limit to how much a customer can view).

-brandon
Williams, Marc
2008-04-22 18:14:04 UTC
Permalink
The OSCAR is the first H.264 encoder appliance designed by HaiVision
specifically for QuickTime environments. It natively supports
the RTSP streaming media protocol. The OSCAR can stream directly to
QuickTime, supporting up to full D1 resolution (full standard
definition resolution, 720 x 480 NTSC / 720 x 576 PAL) at video bit
rates up to 1.5 Mbps. The OSCAR supports either multicast or unicast
RTSP sessions. With either, up to 10 separate destination streams can be
generated by a single OSCAR encoder (more at lower bit
rates). So, on a college campus for example, this simple, compact,
rugged appliance can be placed virtually anywhere and with a
simple network connection can stream video to any QuickTime client on
the local network or over the WAN. If more than 10
QuickTime clients need to view or access the video, the OSCAR can be
directed to a QuickTime Streaming Server which can typically
host well over 1,000 clients.

> -----Original Message-----
> From: Brandon Galbraith [mailto:***@gmail.com]
> Sent: Tuesday, April 22, 2008 1:51 PM
> To: Joe Abley
> Cc: ***@nanog.org; Joe Greco
> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>
> On 4/22/08, Joe Abley <***@ca.afilias.info> wrote:
> >
> >
> > On 22 Apr 2008, at 12:47, Joe Greco wrote:
> >
> > >> You mean a computer? Like the one that runs file-sharing clients?
> > >
> > > Like the one that nobody really wants to watch large
> quantities of
> > > television on?
> >
> >
> > Perhaps more like the mac mini that's plugged into the big plasma
> > screen in the living room? Or one of the many
> stereo-component-styled
> > "media" PCs sold for the same purpose, perhaps even running Windows
> > MCE, a commercial operating system sold precisely because
> people want
> > to hook their computers up to televisions?
> >
> > Or the old-school hacked XBox running XBMC, pulling video over SMB
> > from the PC in the other room?
> >
> > Or the XBox 360 which can play media from the home-user NAS in the
> > back room? The one with the bittorrent client on it? :-)
>
>
> Don't forget the laptop or thin desktop hooked up to the
> 24-60 inch monitor in the bedroom/living room to watch
> Netflix Watch It Now content (which there is no limit on how
> much can be viewed by a customer).
>
> -brandon
> _______________________________________________
> NANOG mailing list
> ***@nanog.org
> http://mailman.nanog.org/mailman/listinfo/nanog
>
Marc Manthey
2008-04-23 01:06:35 UTC
Permalink
> .......is the first H.264 encoder ...... designed by ....
> specifically for ....... environments. It natively supports
> the RTSP streaming media protocol. ........ can stream directly to
> .....

hi marc
so can your "OSCAR" do multicast RTSP streaming over IPv6 to
QuickTime, or not? Or was this just an ad?

cheers

Marc


--
Les enfants teribbles - research and deployment
Marc Manthey - Hildeboldplatz 1a
D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
jabber :***@kgraff.net
blog : http://www.let.de
ipv6 http://www.ipsix.org

Klarmachen zum Ändern!
http://www.piratenpartei-koeln.de/
Williams, Marc
2008-04-23 14:08:36 UTC
Permalink
Just an ad used to illustrate the low cost and ease of use. The fact that it's QuickTime also made me realize it reaches iPods and iPhones over WiFi, and that Apple has web libraries ready for web site development on their Darwin boxes. Also, I would imagine this device could easily be cross-connected and multicast into each access router, so that the only bandwidth used is the bandwidth being paid for by the customer, or QoS unicast streams feeding an MCU. Rambling now, but happy to answer your question.



> -----Original Message-----
> From: Marc Manthey [mailto:***@let.de]
> Sent: Tuesday, April 22, 2008 9:07 PM
> To: ***@nanog.org
> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>
> > .......is the first H.264 encoder ...... designed by ....
> > specifically for ....... environments. It natively supports
> the RTSP
> > streaming media protocol. ........ can stream directly to .....
>
> hi marc
> so your " oskar" can rtsp multicast stream over ipv6 and
> quicktime not , or was this just an ad ?
>
> cheers
>
> Marc
>
>
> --
> Les enfants teribbles - research and deployment Marc Manthey
> - Hildeboldplatz 1a D - 50672 Köln - Germany
> Tel.:0049-221-3558032
> Mobil:0049-1577-3329231
> jabber :***@kgraff.net
> blog : http://www.let.de
> ipv6 http://www.ipsix.org
>
> Klarmachen zum Ändern!
> http://www.piratenpartei-koeln.de/
> _______________________________________________
> NANOG mailing list
> ***@nanog.org
> http://mailman.nanog.org/mailman/listinfo/nanog
>
Marshall Eubanks
2008-04-23 14:15:41 UTC
Permalink
Here is a spec sheet:

<http://goamt.radicalwebs.com/images/products/ds_OSCAR_1106.pdf>

Regards
Marshall

On Apr 23, 2008, at 10:08 AM, Williams, Marc wrote:

> Just an ad used to illustrate the low cost and ease of use. The
> fact that it's quicktime also made me realize it's also ipods,
> iphones/wifi, and that Apple has web libraries ready for web site
> development on their darwin boxes. Also, I would imagine this
> device could easily be cross connected and multicasted into each
> access router so that the only bandwidth used is that bandwidth
> being paid for by customer or QoS unicast streams feeding an MCU.
> Rambling now, but happy to answer your question.
>
>
>
>> -----Original Message-----
>> From: Marc Manthey [mailto:***@let.de]
>> Sent: Tuesday, April 22, 2008 9:07 PM
>> To: ***@nanog.org
>> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>>
>>> .......is the first H.264 encoder ...... designed by ....
>>> specifically for ....... environments. It natively supports
>> the RTSP
>>> streaming media protocol. ........ can stream directly to .....
>>
>> hi marc
>> so your " oskar" can rtsp multicast stream over ipv6 and
>> quicktime not , or was this just an ad ?
>>
>> cheers
>>
>> Marc
>>
>>
>> --
>> Les enfants teribbles - research and deployment Marc Manthey
>> - Hildeboldplatz 1a D - 50672 Köln - Germany
>> Tel.:0049-221-3558032
>> Mobil:0049-1577-3329231
>> jabber :***@kgraff.net
>> blog : http://www.let.de
>> ipv6 http://www.ipsix.org
>>
>> Klarmachen zum Ändern!
>> http://www.piratenpartei-koeln.de/
>> _______________________________________________
>> NANOG mailing list
>> ***@nanog.org
>> http://mailman.nanog.org/mailman/listinfo/nanog
>>
>
> _______________________________________________
> NANOG mailing list
> ***@nanog.org
> http://mailman.nanog.org/mailman/listinfo/nanog
Marc Manthey
2008-04-23 14:35:21 UTC
Permalink
On 23.04.2008 at 16:08, Williams, Marc wrote:

> Just an ad

hi marc....

cool. So I have 3 computers that don't do the job, and I don't have
much money. Can you send me one of those ;) ?
Or the cheapest_beta_tester_non_commercial_offer you can make?

I accept offlist conversation.

thanks and sorry for my ramblings

greetings from germany

Marc


>> -----Original Message-----
>> From: Marc Manthey [mailto:***@let.de]
>> Sent: Tuesday, April 22, 2008 9:07 PM
>> To: ***@nanog.org
>> Subject: Re: [Nanog] ATT VP: Internet to hit capacity by 2010
>>
>>> .......is the first H.264 encoder ...... designed by ....
>>> specifically for ....... environments. It natively supports
>> the RTSP
>>> streaming media protocol. ........ can stream directly to .....
>>
>> hi marc
>> so your " oskar" can rtsp multicast stream over ipv6 and
>> quicktime not , or was this just an ad ?
>>
>> cheers
>>
>> Marc
>>
>>
>> --
>> Les enfants teribbles - research and deployment Marc Manthey
>> - Hildeboldplatz 1a D - 50672 Köln - Germany
>> Tel.:0049-221-3558032
>> Mobil:0049-1577-3329231
>> jabber :***@kgraff.net
>> blog : http://www.let.de
>> ipv6 http://www.ipsix.org
>>
>> Klarmachen zum Ändern!
>> http://www.piratenpartei-koeln.de/
>> _______________________________________________
>> NANOG mailing list
>> ***@nanog.org
>> http://mailman.nanog.org/mailman/listinfo/nanog
>>
m***@bt.com
2008-04-22 19:44:38 UTC
Permalink
> > You mean a computer? Like the one that runs file-sharing clients?
>
> Like the one that nobody really wants to watch large
> quantities of television on? Especially now that it's pretty
> common to have large, flat screen TV's, and watching TV even
> on a 24" monitor feels like a throwback to the '80's?
>
> How about the one that's shaped like a TiVo and has a
> built-in remote control, sane operating software, can be
> readily purchased and set up by a non-techie, and is known to
> work well?

Maybe I have a warped sense of how normal people set up their
home networks but I do notice all kinds of network storage for
sale in local computer shops, and various multi-media player devices
that connect to a TV screen, network, etc. I can understand why
a TiVo collects content over the air, because it has TV receivers
built into it. My PVR does much the same thing. But when it comes
to collecting content from the Internet, it seems easier to just
let the file server do that job. Or run the nice easy software on
your home PC that allows you to search the web for torrents and
just click on the ones you want to download.

Let's face it, TiVo may have a lot of mindshare in that people
constantly talk about the thing as if it was some kind of magic,
but it hardly has the same kind of market share as the iPod.
The functions that the TiVo carries out are software, and
software is rather malleable. The functions of the various devices
can be mixed and matched in various ways. We can't predict which
combos will prevail, but we can make a pretty close guess as to
the functionality of the whole system.

> I remember all the fuss about how people would be making
> phone calls using VoIP and their computers. Yet most of the
> time, I see VoIP consumers transforming VoIP to legacy POTS,
> or VoIP hardphones, or stuff like that.

Cisco sells computers that look like a telephone set but have
an Ethernet jack out the back. Whether you use the Gizmoproject
software on a PC or one of these Cisco devices, you are still
making VoIP calls on a computer. The appearance of a telephone
is not terribly relevant. My mobile phone is a computer with
Python installed on it to run a Russian-English dictionary application
but it also includes a two-way radio transceiver that is programmed
to talk to a local cell transceiver and behave like a telephone.
But it is still a computer at heart.

Anyone remember when a switch was a switch and a router was a router?
Now both of them are backplanes with computers and port interfaces
attached.

> Wow. Okay. I'll just say, then, that such a position seems
> a bit naive, and I suspect that broadband networks are going
> to be crying about the sheer stresses on their networks, when
> moderate numbers of people begin to upload videos into their
> TiVo, which then share them with other TiVo's owned by their
> friends around town, or across an ocean, while also
> downloading a variety of shows from a dozen off-net sources, etc.

Where have you been!?
You have just described the P2P traffic that ISPs and other network
operators have been complaining about since the dawn of this century.
TiVo is just one of a thousand brand names for "home computer".

> > Yes. The overall trend has been to increasingly split the
> market into
> > smaller slivers with additional choices being added and older ones
> > still available.
>
> Yes, but that's still a broadcast model. We're talking about
> an evolution (potentially _r_evolution) of technology where
> the broadcast model itself is altered.

I would say that splitting the market for content into many
small slivers (a forest of shards) is pretty much a revolution.
Whatever technology is used to deliver this forest of shards is
irrelevant because the revolution is in the creation of this
information superhighway with thousands of channels. And even
though the concept predated the exponential growth of the Internet
let's not forget that the web has been there and done that.

> 2) Hard drives continue to grow, and the ability to store
> more, combined
> with higher bit rates (HD, less artifact, whatever) means that more
> bits can be transferred to fill the same amount of time

This is key. Any scenario that does not expect the end user to amass a
huge library of content for later viewing is missing an important
component. And if that content library is encrypted or locked in some
way so that it is married to one brand name device, or pay-per-view
systems, then the majority of the market will pass it by.

> > and a more-or-less
> > fixed portion of people's disposable income. Based on this, I don't
> > expect to see any really huge changes.
>
> That's fair enough. That's optimistic (from a network
> operator's point of view.) I'm afraid that such changes will
> happen, however.

Bottom line is that our networks must be paid for. If consumers want to
use more of our financial investment (capital and opex) then we will be
forced to raise prices up to a level where it limits demand to what we
can actually deliver. Most networks can live with a step up in
consumption if it levels off, because although they may lose money at
first, if consumption dips and levels then they can make it back over
time. If the content senders do not want this dipping and levelling
off, then they will have to foot the bill for the network capacity. And
if they want to recover that cost from the end users then they will
also run into that limit in the amount of money people are able to
spend on entertainment per month.

Broadcast models were built based on a delivery system that scaled up
as big as you want with only capex. But an IP network requires a lot of
opex to maintain any level of capex investment. There ain't no free
lunch.

> The problem with that is that there's money to be had, and if
> you let YouTube host your video, it's YouTube getting the
> juicy ad money.

The only difference from 1965 network TV is that in 1965, the networks
had limited sources capable of producing content at a reasonable cost.
But today, content production is cheap, and competition has driven the
cost of content down to zero. Only the middleman selling ads has a
business model any more. Network operators could fill that middleman
role but most of them are still stuck in the telco/ISP mindset.

> Well, that's the point I'm making. It isn't, and we're going
> to see SOMEONE look at this wonderful Internet thingy and see
> in it a way to "solve" this problem, which is going to turn
> into an operational nightmare as traffic loads increase, and
> a larger percentage of users start to either try to use the
> bandwidth they're being "sold," or actually demand it.

If this really happens, then some companies will fix their marketing
and sales contracts, others will go into Chapter 11. But at the end
of the day, as with the telecom collapse, the networks keep rolling
on even if the management changes.

> For example, consider the PTA meeting: I'm not sure if
> YouTube is going to want to be dealing with maybe 10,000
> videos that are each an hour or two long which are watched by
> maybe a handful of people, at however frequently your local
> PTA meetings get held. Because there's a lot of PTA's. And
> the meetings can be long. Further, it's a perfect situation
> where you're likely to be able to keep a portion of the
> traffic on-net through geolocality effects.

You're right. People are already building YouTube clones or
adding YouTube like video libraries to their websites. This
software combined with lots of small distributed data centers
like Amazon EC2, is likely where local content will go. Again
one wonders why Google and Amazon and Yahoo are inventing
this stuff rather than ISPs. Probably because after the wave
of acquisition by telcos, they neglected the data center half
of the ISP equation. In other words, there are historical
reasons based on ignorance, but no fundamental barrier to
large carriers offering something like Hadoop, EC2, AppEngine.

> I would say that it is very much NOT the same experience as
> programming a PVR. I watch exceedingly little video on the
> computer, for example. I simply prefer the TV.

Maybe PVR doesn't mean the same stateside as here in the UK.
My PVR is a box with two digital TV receivers and 180 gig
hard drive that connects to a TV screen. All interaction is
through the remote and the TV. The difference between this
and P2P video is only the software and the screen we watch it on.
By the way, my 17-month old loves YouTube videos. There may
be a generational thing coming down the road similar to the
way young people have ditched email in favour of IM.

> There are lots of things that multicast can be used for, and
> there's no question that financial data could be useful that
> way. However, what I'm saying is that this isn't
> particularly relevant on the public Internet in a general
> way.

If it were not for these market data feeds, I doubt that
IP multicast would be as widely supported by routers.

> The real problem is that neither your financial transactions
> nor any meaningful amount of video are able to transit
> multicast across random parts of the public Internet, which
> is a bit of a sticking point.

Then there is P2MP (Point to Multi-Point) MPLS...

--Michael Dillon
Joe Greco
2008-04-22 21:08:05 UTC
Permalink
> On 22 Apr 2008, at 12:47, Joe Greco wrote:
> >> You mean a computer? Like the one that runs file-sharing
> >> clients?
> >
> > Like the one that nobody really wants to watch large quantities of
> > television on?
>
> Perhaps more like the mac mini that's plugged into the big plasma
> screen in the living room? Or one of the many stereo-component-styled
> "media" PCs sold for the same purpose, perhaps even running Windows
> MCE, a commercial operating system sold precisely because people want
> to hook their computers up to televisions?
>
> Or the old-school hacked XBox running XBMC, pulling video over SMB
> from the PC in the other room?
>
> Or the XBox 360 which can play media from the home-user NAS in the
> back room? The one with the bittorrent client on it? :-)

Pretty much. People have a fairly clear bias against watching anything
on your conventional PC. This probably has something to do with the way
the display ergonomics work; my best guess is that most people have their
PC's set up in a corner with a chair and a screen suitable for work at a
distance of a few feet. As a result, there's usually a clear delineation
between devices that are used as general purpose computers, and devices
that are used as specialized media display devices.

The "Mac Mini" may be an example of a device that can be used either way,
but do you know of many people that use it as a computer (and do all their
normal computing tasks) while it's hooked up to a large TV? Even Apple
acknowledged the legitimacy of this market by releasing AppleTV.

People generally do not want to hook their _computer_ up to televisions,
but rather they want to hook _a_ computer up to television so that they're
able to do things with their TV that an off-the-shelf product won't do for
them. That's an important distinction, and all of the examples you've
provided seem to be examples of the latter, rather than the former, which
is what I was talking about originally.

If you want to discuss the latter, then we've got to include a large field
of other devices, ironically including the TiVo, which are actually
programmable computers that have been designed for specific media tasks,
and are theoretically reprogrammable to support a wide variety of
interesting possibilities, and there we have the entry into the avalanche
of troubling operational issues that could result from someone releasing
software that distributes large amounts of content over the Internet, and
... oh, my bad, that brings us back to what we were talking about.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Joe Greco
2008-04-22 23:55:08 UTC
Permalink
> > > You mean a computer? Like the one that runs file-sharing clients?
> >
> > Like the one that nobody really wants to watch large
> > quantities of television on? Especially now that it's pretty
> > common to have large, flat screen TV's, and watching TV even
> > on a 24" monitor feels like a throwback to the '80's?
> >
> > How about the one that's shaped like a TiVo and has a
> > built-in remote control, sane operating software, can be
> > readily purchased and set up by a non-techie, and is known to
> > work well?
>
> Maybe I have a warped sense of how normal people set up their
> home networks but I do notice all kinds of network storage for
> sale in local computer shops, and various multi-media player devices
> that connect to a TV screen, network, etc.

Yes, but there's no real standard. It's mostly hodgepodge based
solutions that allow techie types to cobble together some random
collection of hardware to resolve some particular subset of problems.
What the public wants, though, is for someone to solve this problem
and build it for them.

As an example, consider that it's a lot more popular for home users
to source their DVR from their cable company than it is for them to
get a CableCARD receiver card for their PC and try to roll a MythTV
box for themselves.

> I can understand why
> a TiVo collects content over the air, because it has TV receivers
> built into it. My PVR does much the same thing. But when it comes
> to collecting content from the Internet, it seems easier to just
> let the file server do that job. Or run the nice easy software on
> your home PC that allows you to search the web for torrents and
> just click on the ones you want to download.
>
> Let's face it, TiVo may have a lot of mindshare in that people
> constantly talk about the thing as if it was some kind of magic,
> but it hardly has the same kind of market share as the iPod.
> The functions of that the TiVo carries out are software and
> software is rather malleable. The functions of the various devices
> can be mixed and matched in various ways. We can't predict which
> combos will prevail, but we can make a pretty close guess as to
> the functionality of the whole system.

The magic of TiVo isn't that it records video. The magic bit is more
abstract, and it is that someone made a device that actually does what
the average consumer _wants_, rather than simply acting as a generic
DVR.

You actually said it yourself above, "it just seems easier" - but then
you got sidetracked by the loveliness of your PC. The magic of a TiVo-
like device is that end users perceive it as easier. The solution that
doesn't involve them learning what torrents are, or filesharing is, or
having to figure out how to hook a computer up to a TV is, because some
TiVo-like device took it and internalized all of that and SOLVED the
problem, and solved it not only for them but a million other TV viewers
at the same time, that's the solution that's going to be truly
successful.

Not your homegrown DVR.

> > I remember all the fuss about how people would be making
> > phone calls using VoIP and their computers. Yet most of the
> > time, I see VoIP consumers transforming VoIP to legacy POTS,
> > or VoIP hardphones, or stuff like that.
>
> Cisco sells computers that look like a telephone set but have
> and Ethernet jack out the back. Whether you use the Gizmoproject
> software on a PC or one of these Cisco devices, you are still
> making VoIP calls on a computer. The appearance of a telephone
> is not terribly relevant. My mobile phone is a computer with
> Python installed on it to run a Russian-English dictionary application
> but it also includes a two-way radio transciever that is programmed
> to talk to a local cell transciever and behave like a telephone.
> But it is still a computer at heart.

The hell it is. It's still fundamentally a phone. That you can reprogram
it to do other things is technologically interesting to a small number of
geeks, but were you to ask the average person "what is this," they'd still
see it as a phone, and see its primary job as making phone calls.

Further, that does not even begin to argue against what I was saying,
which is that most people are NOT making phone calls using VoIP from
their computers.

> Anyone remember when a switch was a switch and a router was a router?
> Now both of them are backplanes with computers and port interfaces
> attached.

Yes. There's a certain amount of sense to that, at least once you needed
to be able to process things at wirespeed.

> > Wow. Okay. I'll just say, then, that such a position seems
> > a bit naive, and I suspect that broadband networks are going
> > to be crying about the sheer stresses on their networks, when
> > moderate numbers of people begin to upload videos into their
> > TiVo, which then share them with other TiVo's owned by their
> > friends around town, or across an ocean, while also
> > downloading a variety of shows from a dozen off-net sources, etc.
>
> Where have you been!?

I've been right here, serving high bandwidth content for many years.

> You have just described the P2P traffic that ISPs and other network
> operators have been complaining about since the dawn of this century.

No. I've just described something much worse, because there is the
potential for so much more volume. TiVo implies that the device can do
speculative fetch, not just the on-demand sort of things most current
P2P networks do.

> TiVo is just one of a thousand brand names for "home computer".

If you want to define "home computer" that way. Personally, while my
light switches contain microprocessors, and may be reprogrammable, that
does not mean that I view them as computers. I don't think I can run
X11 on my light switch (even though it's got several LED's). I don't
think that it's a good idea to try to run FreeBSD on my security system.
I don't think that I'll be able to run OpenOffice on my Cisco 7960G's.
I'm pretty sure that my thermostat isn't good for running Mahjongg. And
the TiVo probably isn't going to run Internet Explorer anytime soon.

There are microprocessors all over the place. Possessing a microprocessor,
and even being able to affect the programming that runs on a uP, doesn't
make every such device a home computer.

One of these days, we're going to wake up and discover that someone (and I
guess it's got to be someone more persuasive than Apple with their AppleTV
doodad) is going to create some device that is compelling to users. I do
not care that it has a microprocessor inside, or even that it may be
programmable. The thing is likely to be a variation on a set-top box, is
likely to have TiVo-like capabilities, and I'm worried about what's going
to happen to IP networks.

> > > Yes. The overall trend has been to increasingly split the market
> > > into smaller slivers with additional choices being added and older
> > > ones still available.
> >
> > Yes, but that's still a broadcast model. We're talking about
> > an evolution (potentially _r_evolution) of technology where
> > the broadcast model itself is altered.
>
> I would say that splitting the market for content into many
> small slivers (a forest of shards) is pretty much a revolution.

Agreed :-) I'm not sure it'll happen all at once, though.

> Whatever technology is used to deliver this forest of shards is
> irrelevant because the revolution is in the creation of this
> information superhighway with thousands of channels. And even
> though the concept predated the exponential growth of the Internet
> let's not forget that the web has been there and done that.

Ok, I'll accept that. Except I'd like to note that the technology that
I have seen that could enable this is probably the Internet; most other
methods of transmission are substantially more restricted (i.e. it's
pretty difficult for me to go and get a satellite uplink, but pretty much
even the most lowly DSL customer probably has a 384k upstream).

> > 2) Hard drives continue to grow, and the ability to store more,
> > combined with higher bit rates (HD, less artifact, whatever) means
> > that more bits can be transferred to fill the same amount of time
>
> This is key. Any scenario that does not expect the end user to amass a
> huge library of content for later viewing, is missing an important
> component. And if that content library is encrypted or locked in some
> way so that it is married to one brand name device, or pay-per-view
> systems, then the majority of the market will pass it by.

I ABSOLUTELY AGREE... that I wish the world worked that way. ( :-) )

> > > and a more-or-less
> > > fixed portion of people's disposable income. Based on this, I don't
> > > expect to see any really huge changes.
> >
> > That's fair enough. That's optimistic (from a network
> > operator's point of view.) I'm afraid that such changes will
> > happen, however.
>
> Bottom line is that our networks must be paid for. If consumers want to
> use more of our financial investment (capital and opex) then we will be
> forced to raise prices up to a level where it limits demand to what we
> can actually deliver. Most networks can live with a step up in
> consumption if it levels off because although they may lose money at
> first, if consumption dips and levels then they can make it back over
> time. If the content senders do not want this dipping and levelling
> off, then they will have to foot the bill for the network capacity.

That's kind of the funniest thing I've seen today; it sounds so much
like an Ed Whitacre. I've somewhat deliberately avoided the model of having
some large-channel-like "content senders" enter this discussion, because
I am guessing that there will be a large number of people who may simply
use their existing - paid for - broadband connections. That's the PTA
example and probably the "Star Trek: Hidden Frontier" example, and then
for good measure, throw in everyone who will be self-publishing the
content that (looking back on today) used to get served on YouTube. Then
Ed learns that the people he'd like to charge for the privilege of using
"his" pipes are already paying for pipes.

> And if they want to recover that cost from the
> end users then they will also run into that limit in the amount of money
> people are able to spend on entertainment per month.
>
> Broadcast models were built based on a delivery system that scaled up as
> big as you want with only capex. But an IP network requires a lot of
> opex
> to maintain any level of capex investment. There ain't no free lunch.

I certainly agree, that's why this discussion is relevant.

> > The problem with that is that there's money to be had, and if
> > you let YouTube host your video, it's YouTube getting the
> > juicy ad money.
>
> The only difference from 1965 network TV is that in 1965, the networks
> had limited sources capable of producing content at a reasonable cost.
> But today, content production is cheap, and competition has driven the
> cost of content down to zero.

Right, that's a "problem" I'm seeing too.

> Only the middleman selling ads has a
> business model any more. Network operators could fill that middleman
> role but most of them are still stuck in the telco/ISP mindset.

So, consider what would happen if that were to be something that you could
self-manage, outsourcing the hard work to an advertising provider. Call
it maybe Google AdVideos. :-) Host the video on your TiVo, or your PC,
and take advantage of your existing bandwidth. (There are obvious non-
self-hosted models already available, I'm not focusing on them, but they
would work too)

> > Well, that's the point I'm making. It isn't, and we're going
> > to see SOMEONE look at this wonderful Internet thingy and see
> > in it a way to "solve" this problem, which is going to turn
> > into an operational nightmare as traffic loads increase, and
> > a larger percentage of users start to either try to use the
> > bandwidth they're being "sold," or actually demand it.
>
> If this really happens, then some companies will fix their marketing
> and sales contracts, others will go into Chapter 11. But at the end
> of the day, as with the telecom collapse, the networks keep rolling
> on even if the management changes.

I would think that has some operational aspects that are worth talking
about.

> > For example, consider the PTA meeting: I'm not sure if
> > YouTube is going to want to be dealing with maybe 10,000
> > videos that are each an hour or two long which are watched by
> > maybe a handful of people, at however frequently your local
> > PTA meetings get held. Because there's a lot of PTAs. And
> > the meetings can be long. Further, it's a perfect situation
> > where you're likely to be able to keep a portion of the
> > traffic on-net through geolocality effects.
>
> You're right. People are already building YouTube clones or
> adding YouTube like video libraries to their websites. This
> software combined with lots of small distributed data centers
> like Amazon EC2, is likely where local content will go. Again
> one wonders why Google and Amazon and Yahoo are inventing
> this stuff rather than ISPs. Probably because after the wave
> of acquisition by telcos, they neglected the data center half
> of the ISP equation. In other words, there are historical
> reasons based on ignorance, but no fundamental barrier to
> large carriers offering something like Hadoop, EC2, AppEngine.

That's true, but it's also quite possible that we'll see it decentralize
further. Why should I pay someone to host content if I could just share
it from my PC... I'm not saying that I _want_ Microsoft to wake up and
realize that it has a path to strike at some portions of Google, et al.,
by changing the very nature of Internet content distribution, but it's a
significant possibility. That P2P networks work as well as they do says
gobs about the potential.

> > I would say that it is very much NOT the same experience as
> > programming a PVR. I watch exceedingly little video on the
> > computer, for example. I simply prefer the TV.
>
> Maybe PVR doesn't mean the same stateside as here in the UK.
> My PVR is a box with two digital TV receivers and 180 gig
> hard drive that connects to a TV screen. All interaction is
> through the remote and the TV.

Then it's part of your TV system, not really a personal computer.

> The difference between this
> and P2P video is only the software and the screen we watch it on.
> By the way, my 17-month old loves YouTube videos. There may
> be a generational thing coming down the road similar to the
> way young people have ditched email in favour of IM.

That's possible, but there are still some display ergonomics issues
with watching things on a computer. AppleTV is perfectly capable of
downloading YouTube and displaying it on a TV; this is not at issue.
iPhones are _also_ capable of it, but that does not mean that you are
going to want to watch hour-long TV shows on your iPhone with the rest
of your family... that's where having a large TV set, surrounded by
some furniture that people can relax on comes in.

In any case, the point is still that I think there will be a serious
problem if and when someone comes up with a TiVo-like device that
implements what I like to refer to as InterneTiVo. That all the
necessary technology to implement this is available TODAY is completely
irrelevant; it is going to take someone taking all the technical bits,
figuring out how to glue it all together in a usable way, package it
up to hide the gory details, and then sell it as a set-top box for
$cheap in the same way that TiVo did. When TiVo did that, not only
did they make "DVR" a practical reality for the average consumer, but
they also actually managed to succeed at a more abstract level - the
device they designed wasn't just capable of recording Channel 22 from
8:00PM to 9:00PM every Wednesday night, but was actually capable of
analyzing the broadcast schedule, picking up shows at whatever time
they were available, rescheduling around conflicts, and even looking
for things that were similar, that a user might like. A TiVo isn't
a "DVR" (in the sense of the relatively poor capabilities of most of
the devices that bear that tag) so much as it is a personal video
assistant.

So what I'm thinking of is a device that is doing the equivalent of
being a "personal video assistant" on the Internet. And I believe it
is coming. Something that's capable of searching out and speculatively
downloading the things it thinks you might be interested in. Not some
techie's cobbled together PC with BitTorrent and HDMI outputs. An
actual set-top box that the average user can use.
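The scheduling logic such a "personal video assistant" would need is not exotic. Here is a minimal sketch of the speculative-download idea, where all titles, tags, weights, and function names are invented for illustration: score candidate shows against a viewer's interest profile, then fill the available disk with the best matches for off-peak fetching.

```python
# Hypothetical sketch of a speculative-fetch scheduler. The catalog,
# interest profile, and per-show size are all made-up examples.
def score(show_tags, interest_weights):
    """Sum the viewer's weight for each tag the show carries."""
    return sum(interest_weights.get(tag, 0.0) for tag in show_tags)

def pick_downloads(catalog, interest_weights, free_gb, gb_per_show=2.0):
    """Queue the highest-scoring shows until disk space runs out."""
    ranked = sorted(catalog,
                    key=lambda s: score(s["tags"], interest_weights),
                    reverse=True)
    queue, used = [], 0.0
    for show in ranked:
        if used + gb_per_show > free_gb:
            break
        queue.append(show["title"])
        used += gb_per_show
    return queue

profile = {"sci-fi": 0.9, "documentary": 0.6, "reality": 0.1}
catalog = [
    {"title": "Hidden Frontier", "tags": ["sci-fi"]},
    {"title": "PTA Meeting",     "tags": ["local"]},
    {"title": "Nature Hour",     "tags": ["documentary"]},
]
print(pick_downloads(catalog, profile, free_gb=4.0))
# ['Hidden Frontier', 'Nature Hour']
```

The hard part, as with TiVo, isn't this loop; it's packaging it so the average user never has to see it.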

> > There are lots of things that multicast can be used for, and
> > there's no question that financial data could be useful that
> > way. However, what I'm saying is that this isn't
> > particularly relevant on the public Internet in a general
> > way.
>
> If it were not for these market data feeds, I doubt that
> IP multicast would be as widely supported by routers.

If it weren't for the internet, I doubt that IP would be as widely
supported by routers. :-) Something always drives technology.

The hardware specifics of this are getting a bit off-topic, at least
for this list. Do we agree that there's a potential model in the
future where video may be speculatively fetched off the Internet and
then stored for possible viewing, and if so, can we refocus a bit on
that?

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
m***@bt.com
2008-04-23 09:39:33 UTC
Permalink
> > If the content senders do not want this dipping and levelling
> > off, then they will have to foot the bill for the network capacity.
>
> That's kind of the funniest thing I've seen today, it sounds
> so much like an Ed Whitacre.

> Then Ed learns that
> the people he'd like to charge for the privilege of using
> "his" pipes are already paying for pipes.

If they really were paying for pipes, there would be no issue.
The reason there is an issue is because network operators have
been assuming that consumers, and content senders, would not use
100% of the access link capacity through the ISP's core network.
When you assume any kind of overbooking then you are taking the
risk that you have underpriced the service. The ideas people are
talking about, relating to pumping lots of video to every end user,
are fundamentally at odds with this overbooking model. The risk
level has changed from one in 10,000 to one in ten or one in five.
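The arithmetic behind that risk shift is easy to sketch. With invented numbers (10,000 subscribers on 8 Mbps access links, not figures from any real network), the core capacity needed scales linearly with the fraction of users busy at once:

```python
# Illustrative overbooking arithmetic; the subscriber count and link
# speed are made-up example values.
def core_capacity_needed(subscribers, access_mbps, active_fraction):
    """Core bandwidth (Mbps) needed if 'active_fraction' of
    subscribers saturate their access links simultaneously."""
    return subscribers * access_mbps * active_fraction

subs, access = 10_000, 8  # 10k subscribers at 8 Mbps each

# One busy user in 10,000 (web-era assumption) vs. one in five
# (everyone pulling video): roughly 8 Mbps vs. roughly 16 Gbps.
print(core_capacity_needed(subs, access, 1 / 10_000))
print(core_capacity_needed(subs, access, 1 / 5))
```

A 2,000-fold jump in required core capacity, with no change in the access product being sold, is exactly the underpricing risk described above.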

> > But today, content production is cheap, and competition has
> driven the
> > cost of content down to zero.
>
> Right, that's a "problem" I'm seeing too.

Unfortunately, the content owners still think that content is
king and that they are sitting on a gold mine. They fail to see
that they are only raking in revenues because they spend an awful
lot of money on marketing their content. And the market is now
so diverse (YouTube, indie bands, immigrant communities) that
nobody can get anywhere close to 100% share. The long tail seems
to be getting a bigger share of the overall market.

> Host the video on your TiVo, or your PC, and take advantage
> of your existing bandwidth. (There are obvious non-
> self-hosted models already available, I'm not focusing on
> them, but they would work too)

Not a bad idea if the asymmetry in ADSL is not too great. But
this all goes away if we really do get the kind of distributed
data centers that I envision, where most business premises convert
their machine rooms into generic compute/storage arrays.
I should point out that the enterprise world is moving this way,
not just Google/Amazon/Yahoo. For instance, many companies are moving
applications onto virtual machines that are hosted on relatively
generic compute arrays, with storage all in SANs. VMWare has a big
chunk of this market but XEN based solutions with their ability to
migrate running virtual machines, are also in use. And since a lot
of enterprise software is built with Java, clustering software like
Terracotta makes it possible to build a compute array with several
JVM's per core and scale applications with a lot less fuss than
traditional cluster operating systems.

Since most ISPs are now owned by telcos and since most telcos have
lots of strategically located buildings with empty space caused by
physical shrinkage of switching equipment, you would think that
everybody on this list would be thinking about how to integrate all
these data center pods into their networks.

> So what I'm thinking of is a device that is doing the
> equivalent of being a "personal video assistant" on the
> Internet. And I believe it is coming. Something that's
> capable of searching out and speculatively downloading the
> things it thinks you might be interested in. Not some
> techie's cobbled together PC with BitTorrent and HDMI
> outputs.

Speculative downloading is the key here, and I believe that
cobbled together boxes will end up doing the same thing.
However, this means that any given content file will be
going to a much larger number of endpoints, which is something
that P2P handles quite well. P2P software is a form of multicast
as is a CDN (Content Delivery Network) like Akamai. Just because
IP Multicast is built into the routers does not make it the
best way to multicast content. Given that widespread IP multicast
will *NOT* happen without ISP investment, and that it potentially
impacts every router in the network, I think it has a disadvantage
compared with P2P or with systems that rely on a few strategically
placed middleboxes, such as caching proxies.

> The hardware specifics of this are getting a bit off-topic, at
> least for this list. Do we agree that there's a potential
> model in the future where video may be speculatively fetched
> off the Internet and then stored for possible viewing, and if
> so, can we refocus a bit on that?

I can only see this speculative fetching if it is properly implemented
to minimize its impact on the network. The idea of millions of unicast
streams or FTP downloads in one big exaflood, will kill speculative
fetching. If the content senders create an exaflood, then the audience
will not get the kind of experience that they expect, and will go
elsewhere.

We had this experience recently in the UK when they opened a new
terminal at Heathrow airport and British Airways moved operations to
T5 overnight. The exaflood of luggage was too much for the system,
and it has taken weeks to get to a level of service that people still
consider "bad service" but bearable. They had so much misplaced
luggage that they sent many truckloads of it to Italy to be sorted
and returned to the owners. One of my colleagues claims that the only
reason the terminal is now half-way functional is that many
travellers are afraid to take any luggage at all except for carry-on.
So far two executives of the airline have been sacked, and the
government is being lobbied to break the airport operator monopoly so
that at least one of London's two major airports is run by a
different company.

The point is that only the most stupid braindead content provider
executive would unleash something like that upon their company by
creating an exaflood. Personally, I think the optimal solution is a
form of P2P that is based on published standards, with open source
implementations, and relies on a topology guru inside each ISP's
network to inject traffic policy information into the system.
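That topology guru could be as simple as an ISP-published preference map that P2P clients consult when choosing which peers to download from. Here is a minimal sketch; the path classes, cost values, and function names are all invented for illustration (the general idea resembles what was later pursued in the P4P/ALTO work):

```python
# Hypothetical ISP-published policy: lower numbers mean cheaper paths.
# The classes and costs here are made-up examples.
ISP_PREFERENCE = {
    "same-pop": 0,   # cheapest: same point of presence
    "on-net":   1,   # elsewhere on the ISP's own network
    "peering":  2,   # reachable over settlement-free peering
    "transit":  3,   # most expensive: paid transit
}

def rank_peers(peers):
    """Order candidate peers cheapest-path-first per the ISP's policy;
    unknown path classes sort last."""
    return sorted(peers, key=lambda p: ISP_PREFERENCE.get(p["path"], 99))

peers = [
    {"addr": "198.51.100.7", "path": "transit"},
    {"addr": "192.0.2.10",   "path": "same-pop"},
    {"addr": "203.0.113.5",  "path": "peering"},
]
print([p["addr"] for p in rank_peers(peers)])
# ['192.0.2.10', '203.0.113.5', '198.51.100.7']
```

The client still downloads the same content; it just tries on-net sources first, which keeps the speculative-fetch traffic off the expensive transit links.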

--Michael Dillon