Discussion:
would ip6 help us save energy?
Marc Manthey
2008-04-26 17:39:28 UTC
hello

i have a question:

If we used multicast streaming only, for appropriate content, wouldn't this decrease the overall internet traffic?

Isn't this an argument for ip6 / greenip6 ;) as well?


just my 2 cents

marc


--
Les enfants teribbles - research and deployment
Marc Manthey - Hildeboldplatz 1a
D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
jabber :***@kgraff.net
blog : http://www.let.de
ipv6 http://www.ipsix.org
Adrian Chadd
2008-04-26 17:57:04 UTC
Post by Marc Manthey
hello
" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet traffic ?
Isn?t this an argument for ip6 / greenip6 ;) aswell ?
Some people make more money shipping more bits. They may not have
any motivation or desire to decrease traffic.



Adrian
Marc Manthey
2008-04-26 18:03:54 UTC
Post by Adrian Chadd
Post by Marc Manthey
hello
" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet
traffic ?
Isn?t this an argument for ip6 / greenip6 ;) aswell ?
Some people make more money shipping more bits. They may not have
any motivation or desire to decrease traffic.
hello adrian, yes i know

but i would like to know if there are links, case studies, papers, or
statistics around to visualise it, for a presentation that i am
planning to do.

greetings

Marc
Antonio Querubin
2008-04-26 18:42:32 UTC
Post by Marc Manthey
" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet traffic ?
On one hand, the amount of content that is 'live' or 'continuous' and
suitable for multicast streaming isn't a large percentage of overall
internet traffic to begin with, so moving most live content to
multicast would have little overall effect.

However, for some live content where the audience is either very large or
concentrated on various networks, moving to multicast certainly has
significant advantages in reducing traffic on the networks closest to the
source or where the viewer concentration is high (particularly where the
viewer numbers occasionally spike significantly higher than the average).

But network providers make their money in part by selling bandwidth. The
folks who would need to push for multicast are the live/perishable content
providers as they're the ones who'd benefit the most. But if bandwidth is
cheap they're not really gonna care.
Post by Marc Manthey
Isn't this an argument for ip6 / greenip6 ;) as well?
It's an argument for decreasing traffic and improving network efficiency
and scalability to handle 'flash crowd events'. IPv6 has nothing to do
with it.

Antonio Querubin
whois: AQ7-ARIN
Marc Manthey
2008-04-26 19:03:59 UTC
Post by Antonio Querubin
Post by Marc Manthey
" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet
traffic ?
On one hand, the amount of content that is 'live' or 'continuous'
and suitable for multicast streaming isn't a large percentage of
overall internet traffic to begin with, so moving most live content
to multicast would have little overall effect.
right, i am aware of that; it was meant as a hypothetical rant ;)
Post by Antonio Querubin
However, for some live content where the audience is either very
large or concentrated on various networks, moving to multicast
certainly has significant advantages in reducing traffic on the
networks closest to the source or where the viewer concentration is
high (particularly where the viewer numbers occasionally spike
significantly higher than the average).
i am not a math genius, and i am talking about, for example, serving

10,000 unicast streams and
10,000 multicast streams

would the multicast streams be more efficient? or let's say, would you
need more machines to serve 10,000 unicast streams?
Post by Antonio Querubin
But network providers make their money in part by selling
bandwidth. The folks who would need to push for multicast are the
live/perishable content providers as they're the ones who'd benefit
the most. But if bandwidth is cheap they're not really gonna care.
well, cheap is relative. i bet it's cheap where google hosts its
NOCs, but it's not cheap in Brazil, Argentina, or Indonesia.
Post by Antonio Querubin
Post by Marc Manthey
Isn't this an argument for ip6 / greenip6 ;) as well?
It's an argument for decreasing traffic and improving network
efficiency and scalability to handle 'flash crowd events'. IPv6 has
nothing to do with it.
thanks for your opinion.

Marc
Antonio Querubin
2008-04-27 21:35:10 UTC
Post by Marc Manthey
i am not a math genius, and i am talking about, for example, serving
10,000 unicast streams and
10,000 multicast streams
would the multicast streams be more efficient? or let's say, would you
need more machines to serve 10,000 unicast streams?
For 10000 concurrent unicast streams you'd need not just more servers.
You'd need a significantly different network infrastructure than something
that would have to handle only a single multicast stream.
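As a rough illustration, the bandwidth side of the difference can be put into numbers; the per-viewer stream bitrate below is an assumed, illustrative figure, not one from this thread:

```python
# Back-of-the-envelope comparison: bandwidth leaving the source when
# serving 10,000 viewers via unicast vs. a single multicast stream.
# STREAM_KBPS is an assumed bitrate for the sake of the arithmetic.
STREAM_KBPS = 500
VIEWERS = 10_000

unicast_mbps = VIEWERS * STREAM_KBPS / 1_000   # one copy per viewer
multicast_mbps = STREAM_KBPS / 1_000           # one copy, replicated in-network

print(unicast_mbps)    # 5000.0 Mbps at the source for unicast
print(multicast_mbps)  # 0.5 Mbps at the source for multicast
```

The in-network replication is what shifts the load off the source, which is why the infrastructure, not just the server count, differs.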

But supporting multicast isn't without its own problems either. Even the
destination networks would have to consider implementing IGMP and/or MLD
snooping in their layer 2 devices to obtain maximum benefit from
multicast.

Antonio Querubin
whois: AQ7-ARIN
Marc Manthey
2008-04-27 21:50:09 UTC
Post by Antonio Querubin
Post by Marc Manthey
i am not a math genius, and i am talking about, for example, serving
10,000 unicast streams and
10,000 multicast streams
would the multicast streams be more efficient? or let's say, would you
need more machines to serve 10,000 unicast streams?
hello all ,
Post by Antonio Querubin
For 10000 concurrent unicast streams you'd need not just more servers.
thanks for the participation on this topic; i was speaking
"theoretically", and this was actually what i wanted to hear ;)
Post by Antonio Querubin
You'd need a significantly different network infrastructure than
something that would have to handle only a single multicast stream.
But supporting multicast isn't without it's own problems either.
Even the destination networks would have to consider implementing
IGMP and/or MLD snooping in their layer 2 devices to obtain maximum
benefit from multicast.
i was reading some papers about multicast activity on 9/11, and it was
interesting to read that it just worked even when most
of the "big player" sites went offline, so this gives me another
angle for emergency scenarios.


<http://www.nanog.org/mtg-0110/ppt/eubanks.ppt>

<http://multicast.internet2.edu/workshops/illinois/internet2-multicast-workshop-31-july-2-august-2006-1-overview.ppt>
Post by Antonio Querubin
Akamai has built a Content Delivery Network (CDN) so that they do not
have to rely on any specific ISP or any specific IP network
functionality. If you go with IP Multicast or MPLS P2MP (Point to
MultiPoint), then you are limited to only using ISPs who have
implemented the right protocols and who peer using those protocols.
so this is similar to a "walled garden" and not what we really want,
but i was clear that this is actually the only way to introduce
a "new" technology into an existing infrastructure.


regards, and sorry for being a bit off-topic

Marc

<www.lettv.de>
Joel Jaeggli
2008-04-27 22:44:48 UTC
Post by Marc Manthey
Post by Antonio Querubin
Post by Marc Manthey
i am not a math genius, and i am talking about, for example, serving
10,000 unicast streams and
10,000 multicast streams
would the multicast streams be more efficient? or let's say, would you
need more machines to serve 10,000 unicast streams?
hello all ,
Post by Antonio Querubin
For 10000 concurrent unicast streams you'd need not just more servers.
thanks for the participation on this topic; i was speaking
"theoretically", and this was actually what i wanted to hear ;)
Your delivery needs to be sized against demand. 12 years ago, when I
started playing around with streaming on a university campus, boxes like
the following were science fiction:

http://www.sun.com/servers/networking/streamingsystem/specs.xml#anchor4

As, for that matter, were n x 10Gb/s ethernet trunks.

To make this scale in either dimension, audience or bandwidth, the
interests of the service providers and the content creators need to be
aligned. Traditionally this has been something of a challenge for
multicast deployments. Not that it hasn't happened but it's not an
automatic win either.
Post by Marc Manthey
Post by Antonio Querubin
You'd need a significantly different network infrastructure than
something that would have to handle only a single multicast stream.
But supporting multicast isn't without its own problems either.
Even the destination networks would have to consider implementing
IGMP and/or MLD snooping in their layer 2 devices to obtain maximum
benefit from multicast.
i was reading some papers about multicast activity on 9/11, and it was
interesting to read that it just worked even when most
of the "big player" sites went offline, so this gives me another
angle for emergency scenarios.
The big player news sites were not taken offline due to network capacity
issues but rather because their dynamic content delivery platforms
couldn't cope with the flash crowds...

Once they got rid of the dynamically generated content (per viewer page
rendering, advertising) they were back.
Post by Marc Manthey
<http://www.nanog.org/mtg-0110/ppt/eubanks.ppt>
<http://multicast.internet2.edu/workshops/illinois/internet2-multicast-workshop-31-july-2-august-2006-1-overview.ppt>
Post by Antonio Querubin
Akamai has built a Content Delivery Network (CDN) so that they do not
have to rely on any specific ISP or any specific IP network
functionality. If you go with IP Multicast or MPLS P2MP (Point to
MultiPoint), then you are limited to only using ISPs who have
implemented the right protocols and who peer using those protocols.
so this is similar to a "walled garden" and not what we really want,
but i was clear that this is actually the only way to introduce
a "new" technology into an existing infrastructure.
A maturing internet platform may be quite successful at resisting
attempts to change it. It's entirely possible, for example, that evolving
the mbone would have been more successful than "going native". The mbone
was in many respects a proto p2p overlay, just as ip was an overlay on the
circuit-switched pstn.

That's all behind us, however, and the notion that we should drop all
the unicast streaming or p2p in favor of multicast transport because
it's greener or lighter weight is just so much tilting at windmills,
something I've done altogether too much of.

Use the tool where it makes sense and can be delivered in a timely fashion.
Post by Marc Manthey
regards, and sorry for being a bit off-topic
Marc
<www.lettv.de>
_______________________________________________
NANOG mailing list
http://mailman.nanog.org/mailman/listinfo/nanog
Dale Carstensen
2008-04-28 13:01:52 UTC
I became aware of something called espn360 last fall. I just did a
google search so I could provide a URL, but one of the top search
responses was an Aug 9, 2007 posting saying "ESPN360 Dies an
Unnecessary Death: A Lesson in Network Neutrality ..." I don't
think it's dead, though, and maybe if you don't know about it, you
can do your own google search.

I think Disney/ABC thinks they can get individual ISPs to pay them
to carry sports audio/video streams. I suppose that would be yet
another multicast stream method, assuming an ISP location had multiple
customers viewing the same stream.

Are other content providers trying to do something similar? How are
operators dealing with this? What opinions are there in the operator
community?

Mr. Dale
Jim Popovitch
2008-04-28 17:44:23 UTC
Post by Dale Carstensen
I think Disney/ABC thinks they can get individual ISPs to pay them
to carry sports audio/video streams. I suppose that would be yet
another multicast stream method, assuming an ISP location had multiple
customers viewing the same stream.
Are other content providers trying to do something similar? How are
operators dealing with this? What opinions are there in the operator
community?
I'm not sure of the particulars, but Hulu (NBC/Universal and News
Corp) and FanCast (Comcast) seem to have an interesting relationship.
I would love to know more, but i detest reading financials. ;-)

-Jim P.
Frank Bulk - iNAME
2008-04-28 20:26:55 UTC
Dale:

ESPN360 used to be something that internet subscribers paid for themselves,
but now it's something that ISPs (most interesting to those who are also
video providers) can offer.

If you google around you can find a pretty good Wikipedia page on ESPN360.

I looked into this for our operations because we do both (internet and
video). The price was reasonable, and you only pay based on the number
of internet subs that meet their minimum performance standards. Since
50% of our user base is at 128/128 kbps, that's a lot of subscribers we
didn't need to pay for. In the end, I didn't get buy-in from the rest
of the management team for adding this. I think they perceived (and
probably correctly so) that too few of our users would actually *use*
it. If I could get even 2% of our customer base seriously interested I
think we would move on this.

BTW, there's no multicast (at least from Disney/ABC directly) involved.
It's just another unicast video stream like YouTube.

Frank

Williams, Marc
2008-04-28 21:43:47 UTC
Post by Frank Bulk - iNAME
I looked into this for our operations because we do both
(internet and video). The price was reasonable
That's interesting. Under the commercial television broadcast model of
American networks such as ABC, CBS, FOX, NBC, The CW and MyNetworkTV,
affiliates give up portions of their local advertising airtime in
exchange for network programming.
Marc Manthey
2008-05-04 22:34:51 UTC
evening all,

found a related article about power consumption savings in ip6.

-

Up to 300 Megawatt Worth of Keepalive Messages to be Saved by IPv6?

http://www.circleid.com/posts/81072_megawatts_keepalive_ipv6/

http://www.niksula.hut.fi/~peronen/publications/haverinen_siren_eronen_vtc2007.pdf
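For scale, the headline figure can be reproduced with simple arithmetic; every input below is an illustrative assumption on my part, not a number taken from the linked paper:

```python
# Aggregate power drawn by periodic keepalives across many devices:
# average power per device = energy per keepalive / keepalive interval.
def keepalive_watts(devices: int, joules_per_keepalive: float,
                    interval_s: float) -> float:
    return devices * joules_per_keepalive / interval_s

# Assumed inputs: 1 billion devices, 3 J per keepalive cycle (radio
# wake-up plus transmission), one keepalive every 10 seconds.
megawatts = keepalive_watts(1_000_000_000, 3.0, 10.0) / 1e6
print(megawatts)  # 300.0 MW under these assumed inputs
```

The interesting knob is the interval: NAT binding timeouts force short intervals, which is the mechanism the article blames.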


still interested in other links and publications


regards

Marc

--
"Use your imagination not to scare yourself to death
but to inspire yourself to life."

Les enfants teribbles - research and deployment
Marc Manthey - head of research and innovation
Hildeboldplatz 1a D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
jabber :***@kgraff.net
blog : http://www.let.de
ipv6 http://www.ipsix.org
xing : https://www.xing.com/profile/Marc_Manthey
Adrian Chadd
2008-05-04 22:57:36 UTC
Post by Marc Manthey
evening all,
found a related article about power consumption savings in ip6.
-
Up to 300 Megawatt Worth of Keepalive Messages to be Saved by IPv6?
http://www.circleid.com/posts/81072_megawatts_keepalive_ipv6/
http://www.niksula.hut.fi/~peronen/publications/haverinen_siren_eronen_vtc2007.pdf
I'd seriously be looking at making current -software- run more efficiently
before counting ipv6-related power savings.




Adrian
Iljitsch van Beijnum
2008-05-05 08:07:12 UTC
Post by Adrian Chadd
I'd seriously be looking at making current -software- run more
efficiently before counting ipv6-related power savings.
Good luck with that.

Obviously there is a lot to be gained at that end, but that doesn't
mean we should ignore power use in the network. One thing that could
help here is to increase the average packet size. Whenever I've
looked, this has always hovered around 500 bytes for internet traffic.
If we can get jumboframes widely deployed, it should be doable to
double that. Since most work in routers and switches is per-packet
rather than per-bit, this has the potential to save a good amount of
power.

Now obviously this only works in practice if routers and switches
actually use less power when there are fewer packets, which is not a
given. It helps even more if the maximum throughput isn't based on 64-
byte packets. Why do people demand that, anyway? The only thing I can
think of is DoS attacks. But that can be solved by only allowing end-
users to send an average packet size of 500 (or 250, or whatever)
bytes. So if you have a 10 Mbps connection you don't get to send 14000
64-byte packets per second, but a maximum of 2500 packets per second.
So with 64-byte packets you only get to use 1.25 Mbps.
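A minimal sketch of the arithmetic behind those numbers (payload sizes only, Ethernet framing overhead ignored, so the results differ slightly from the rounded figures in the text):

```python
# Packet-per-second cap implied by an average-packet-size rule.
def pps_cap(rate_bps: int, avg_packet_bytes: int) -> float:
    return rate_bps / (avg_packet_bytes * 8)

cap = pps_cap(10_000_000, 500)       # 10 Mbps at a 500-byte average
throughput_64b = cap * 64 * 8 / 1e6  # usable Mbps if every packet is 64 bytes

print(cap)             # 2500.0 packets per second
print(throughput_64b)  # 1.28 Mbps (the text rounds this to 1.25)
```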

I'm guessing having a 4x10Gbps line card that "only" does 14 Mpps
total rather than 14 Mpps per port would be a good deal cheaper.
Obviously if you're a service provider with a customer that sends 10
Gbps worth of VoIP you can only use one of those 4 ports but somehow,
I'm thinking few people use 10 Gbps worth of VoIP...

Iljitsch

PS. Am I the only one who is annoyed by the reduction in usable
subject space by the superfluous [NANOG]?
Iljitsch van Beijnum
2008-05-05 16:22:54 UTC
Post by Iljitsch van Beijnum
Obviously there is a lot to be gained at that end, but that doesn't
mean we should ignore power use in the network. One thing that could
help here is to increase the average packet size. Whenever I've
looked, this has always hovered around 500 bytes for internet
traffic.
If we can get jumboframes widely deployed,
You don't need jumboframes, you just need to have working Path MTU
Discovery.
Or hand-nail your MSS to 1400 or something. But if you don't do
either of those, you basically need to assume that the minimum MTU is
512 or so.
???

Very few people out there use an MTU significantly below 1500 bytes. A
1500-byte MTU will give you an _average_ packet size of ~1000 on long-
lived TCP flows because there is one tiny ACK for every two full size
data segments. (In the other direction, but let's not make things too
complicated right now.) The reason that the average is more like half
that is that on short interactions the last packet is shorter, and of
course there's stuff like gaming, VoIP, DNS that simply uses small
packets.
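The ~1000-byte average for a bulk transfer follows directly from that two-segments-per-ACK pattern; the 40-byte ACK below assumes minimal IPv4+TCP headers:

```python
# Average packet size of a long-lived TCP flow: two full-size data
# segments for every small ACK (40 bytes = minimal IPv4 + TCP headers).
DATA_SEGMENT = 1500
ACK = 40
average = (2 * DATA_SEGMENT + ACK) / 3
print(round(average))  # 1013 bytes, i.e. roughly the ~1000-byte figure
```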
Post by Iljitsch van Beijnum
Now obviously this only works in practice if routers and switches
actually use less power when there are fewer packets, which is not a
given. It helps even more if the maximum throughput isn't based on 64-
byte packets. Why do people demand that, anyway?
Max throughput, or max packets/sec? Max data throughput happens at the
*other* end, with 9K mobygrams...
Right, with 9k packets you only need to send around 13 kpps to fill up
1 Gbps, with 1500 bytes it's some 83 kpps. Helps in overhead, TCP
performance and (potentially) power use.
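Those kpps figures are just the line rate divided by packet size in bits (again ignoring framing overhead, so they land slightly above the rounded numbers in the text):

```python
# Packets per second required to fill a 1 Gbps link at two MTUs.
def pps_to_fill(rate_bps: int, packet_bytes: int) -> float:
    return rate_bps / (packet_bytes * 8)

print(pps_to_fill(1_000_000_000, 9000))  # ~13889 pps with 9k jumboframes
print(pps_to_fill(1_000_000_000, 1500))  # ~83333 pps with a 1500-byte MTU
```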

But someone who is sending a 200 byte packet today isn't going to send
something larger when her MTU is increased from 1500 to 9000 so the
_average_ won't increase by a factor 6.
Post by Iljitsch van Beijnum
PS. Am I the only one who is annoyed by the reduction in usable
subject space by the superfluous [NANOG]?
Those of us who are *really* annoyed by stuff like that usually cook
up a procmail recipe to strip it out.. :)
I got my procmail set up so it mostly does what I need right now,
better not mess with it...
Niels Bakker
2008-05-05 22:57:40 UTC
Post by Iljitsch van Beijnum
PS. Am I the only one who is annoyed by the reduction in usable
subject space by the superfluous [NANOG]?
No, and I'm just as annoyed by the (non-McQ) footer with superfluous
information attached to each mail.
Post by Iljitsch van Beijnum
Those of us who are *really* annoyed by stuff like that usually cook
up a procmail recipe to strip it out.. :)
That will only lead to duplicates of "Re: " (before and after the tag)
in the Subject, I'm afraid.


-- Niels.

Joel Jaeggli
2008-05-05 00:59:19 UTC
Notwithstanding the fact that keepalives are a huge issue for tiny
battery-powered devices, there's a false economy in assuming those
packets wouldn't have to be sent with IPv6...
Post by Marc Manthey
evening all,
found a related article about power consumption savings in ip6.
-
Up to 300 Megawatt Worth of Keepalive Messages to be Saved by IPv6?
http://www.circleid.com/posts/81072_megawatts_keepalive_ipv6/
http://www.niksula.hut.fi/~peronen/publications/haverinen_siren_eronen_vtc2007.pdf
still interested in other links and publications
regards
Marc
--
"Use your imagination not to scare yourself to death
but to inspire yourself to life."
Les enfants teribbles - research and deployment
Marc Manthey - head of research and innovation
Hildeboldplatz 1a D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
blog : http://www.let.de
ipv6 http://www.ipsix.org
xing : https://www.xing.com/profile/Marc_Manthey
Randy Bush
2008-05-05 01:47:49 UTC
Post by Marc Manthey
found a related article about power consumption savings in ip6.
no, you found an article about bad nat design in a market lacking the
ability to standardize on a clean one.

if you look, you can also find statements by the same folk explaining
how ipv6 will help prevent car accidents involving falling rocks. yes,
i am serious.

note that i work very hard on ipv6 deployment. i just don't encourage
or support marketing insanity.

randy
Jay Hennigan
2008-04-26 19:12:54 UTC
I'm wondering how much content is used TiVo style, not in real time,
but fairly soon thereafter. It might make sense to multicast feeds to
local caches so when people actually want stuff, it doesn't come all
the way across the net.
I think the good folks at Akamai may have already thought of this. :-)

--
Jay Hennigan - CCIE #7880 - Network Engineering - ***@impulse.net
Impulse Internet Service - http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV
Marc Manthey
2008-04-26 19:21:15 UTC
Post by Jay Hennigan
I'm wondering how much content is used TiVo style, not in real time,
but fairly soon thereafter. It might make sense to multicast feeds to
local caches so when people actually want stuff, it doesn't come all
the way across the net.
I think the good folks at Akamai may have already thought of this. :-)

http://research.microsoft.com/~ratul/akamai.html

http://www.akamai.com/html/about/management_dl.html

multicast ?

i have another theory, but i don't talk about it ;)

BUT ..... someone mentioned akamai had 13,000 servers; imagine they
just needed 100. would this hurt? ;)

cheers

Marc
m***@bt.com
2008-04-27 17:21:05 UTC
I'm wondering how much content is used TiVo style, not in real time,
but fairly soon thereafter. It might make sense to multicast feeds to
local caches so when people actually want stuff, it doesn't come all
the way across the net.
I think the good folks at Akamai may have already thought of this. :-)
Akamai has built a Content Delivery Network (CDN) so that they do not
have to rely on any specific ISP or any specific IP network
functionality. If you go with IP Multicast or MPLS P2MP (Point to
MultiPoint), then you are limited to only using ISPs who have
implemented the right protocols and who peer using those protocols.
P2P is a lot like CDN because it does not rely on any specific ISP
implementation, but as a result of being 100% free of the ISP, P2P
also lacks the knowledge of the network topology that it needs to be
efficient. Of course, a content provider could leverage P2P by
predelivering its content to strategically located sites in the
network, just like they do with a CDN.

IP multicast and P2MP have routing protocols which tell them where to
send content. CDNs are either set up manually or use their own
proprietary methods to figure out where to send content. P2P currently
doesn't care about topology because it views the net as an amorphous
cloud.

NNTP, the historical firehose protocol, just floods it out
to everyone who hasn't seen it yet, but in fact the consumers of
an NNTP feed have been set up statically in advance. And this static
setup does include knowledge of the ISP's network topology, and knowledge
of the ISP's economic realities. I'd like to see a P2P protocol that
sets up paths dynamically, but allows for inputs as varied as those
old NNTP setups. There was also a time when LANs had some form of
economic reality configured in, i.e. some users were only allowed
to log into the LAN during certain time periods on certain days.
Is there any ISP that wouldn't want some way to signal P2P clients
how to use spare bandwidth without ruining the network for other
paying customers?

--Michael Dillon
Joel Jaeggli
2008-04-27 18:27:47 UTC
Post by m***@bt.com
NNTP, the historical firehose protocol, just floods it out
to everyone who hasn't seen it yet, but in fact the consumers of
an NNTP feed have been set up statically in advance. And this static
setup does include knowledge of the ISP's network topology, and knowledge
of the ISP's economic realities. I'd like to see a P2P protocol that
sets up paths dynamically, but allows for inputs as varied as those
old NNTP setups. There was also a time when LANs had some form of
economic reality configured in, i.e. some users were only allowed
to log into the LAN during certain time periods on certain days.
Is there any ISP that wouldn't want some way to signal P2P clients
how to use spare bandwidth without ruining the network for other
paying customers?
I think it's safe to assume that isps are steering p2p traffic for the
purposes of adjusting their ratios on peering and transit links...

while it lacks the intentionality of playing with the usenet
spam/warez/porn firehose, a little TE to shift it from one exit to
another when you have lots of choices is presumably a useful knob to have.

Layer violations to tell applications that they should care about some
peers in their overlay network vs others seem like something with a lot
of potential unintended consequences.
Jorge Amodio
2008-04-29 16:31:31 UTC
Post by Marc Manthey
Isn't this an argument for ip6 / greenip6 ;) as well?
besides the multicast argument: ipv6, and the transition to it with
dual stacks, etc., will afaik require more horsepower and memory to
handle routing info/updates. i don't think it will reduce energy
consumption; au contraire.

one place where major improvements can be made is to
increase the efficiency of switched power supplies on servers
and other gear installed in large datacenters.

My .02
Marc Manthey
2008-04-29 21:48:00 UTC
Post by Jorge Amodio
besides the multicast argument,
hi Jorge, all

ok, i was talking about a "campus" installation

imagine you want to broadcast a live event:
10,000 unicast streams versus one multicast stream, for example.

from what toni replied, you need less horsepower with the multicast
streams
Post by Jorge Amodio
For 10000 concurrent unicast streams you'd need not just more servers.
but i would like to know how this could be calculated.

my 00.2 ;)

marc

-
Les enfants teribbles - research and deployment
Marc Manthey - Hildeboldplatz 1a
D - 50672 Köln - Germany
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
jabber :***@kgraff.net
blog : http://www.let.de
ipv6 http://www.ipsix.org

Klarmachen zum Ändern!
http://www.piratenpartei-koeln.de
Mike Fedyk
2008-05-05 19:56:38 UTC
Post by Iljitsch van Beijnum
think of is DoS attacks. But that can be solved by only allowing end-
users to send an average packet size of 500 (or 250, or whatever)
bytes. So if you have a 10 Mbps connection you don't get to send
14000 64-byte packets per second, but a maximum of 2500 packets per
second. So with 64-byte packets you only get to use 1.25 Mbps.
You have just cut out the VoIP industry, TCP setup, IM or most types of
real-time services on the Internet.
Post by Iljitsch van Beijnum
PS. Am I the only one who is annoyed by the reduction in usable
subject space by the superfluous [NANOG]?
Yes you are the only one. ;)
Iljitsch van Beijnum
2008-05-05 20:02:54 UTC
Post by Mike Fedyk
Post by Iljitsch van Beijnum
So if you have a 10 Mbps connection you don't get to send 14000
64-byte packets per second, but a maximum of 2500 packets per second.
So with 64-byte packets you only get to use 1.25 Mbps.
You have just cut out the VoIP industry, TCP setup, IM or most types of
real-time services on the Internet.
Of course not. Like I said, as an average end-user with 10 Mbps you
get to send a maximum of 2500 packets per second. That's plenty to do
VoIP, set up TCP sessions or do IM. You just don't get to send the
full 10 Mbps at this size.
Nathan Ward
2008-05-06 00:07:13 UTC
Post by Iljitsch van Beijnum
Of course not. Like I said, as an average end-user with 10 Mbps you
get to send a maximum of 2500 packets per second. That's plenty to do
VoIP, set up TCP sessions or do IM. You just don't get to send the
full 10 Mbps at this size.
Hmm, I see value in that.

But, good luck trying to convince customers to take a pps limitation
in addition to a Mbps limitation, whether they ever exceed that pps or
not. You /might/ convince them to take a pps limitation only - but if
they want to do 30Mbit (ie 2500pps @ 1500b) then your product needs to
support that.

Maybe you just start calling "10Mbps" "10Mbps, assuming a 500b average
packet size."

Anyway, nice idea in theory - putting more real-world limitations
into sold product limitations - but I don't see it working out with
marketing people, etc. unless someone has been doing it for years
already. It'd be good if the world were all engineers though, huh?

--
Nathan Ward
Adrian Chadd
2008-05-06 01:58:46 UTC
Post by Nathan Ward
Maybe you just start calling "10Mbps" "10Mbps, assuming a 500b average
packet size."
Anyway, nice idea in theory - putting more real-world limitations
into sold product limitations - but I don't see it working out with
marketing people, etc. unless someone has been doing it for years
already. It'd be good if the world were all engineers though, huh?
NPE-XXX, anyone?



Adrian
