Discussion:
fair warning: less than 1000 days left to IPv4 exhaustion
Mike Leber
2008-05-02 18:51:43 UTC
Permalink
Since nobody mentioned it yet, there are now less than 1000 days projected
until IPv4 exhaustion:

http://www.potaroo.net/tools/ipv4/

Do you have an IPv6 plan?

How long do you think it will be until Sarbanes Oxley and SAS 70 auditors
start requiring disclosure of IPv4 exhaustion as a business continuity
risk, as well as the presence or lack thereof of an IPv6 plan?

When do you plan on telling your customers? (afterwards?)

Ahhh, you don't have any customers that have to plan to buy equipment 2
years in advance. Ok, I understand.

Mike.
ps. 1000 days assumes no rush, speculation, or hoarding. Do people do
that?

pps. Of course these are provocative comments for amusement. :)

ppps. Or not if you don't have any kind of IPv6 plan. Sorry, sorry...

+----------------- H U R R I C A N E - E L E C T R I C -----------------+
| Mike Leber Wholesale IPv4 and IPv6 Transit 510 580 4100 |
| Hurricane Electric Web Hosting Colocation AS6939 |
| ***@he.net http://he.net |
+-----------------------------------------------------------------------+
Deepak Jain
2008-05-02 21:53:15 UTC
Permalink
Post by Mike Leber
ppps. Or not if you don't have any kind of IPv6 plan. Sorry, sorry...
Does it take most network operators more than 1000 days to make an IPv6
plan and start implementing it?

I suppose there is always some network running obsolete gear out there
somewhere, but their upstream guy may provide them something to avoid
the pain (like reclaimed v4 space) or a gateway or other service.

I guess another way to say it is... if you can afford for the planning
and implementation to have so many layers of sign-off and buy-in that it
takes years, then you can afford the costs of everything else needed to
implement it.

Not to mention, piggyback off of all the published BCPs, improved tools
and software, and other things that 2 more years will provide.

Deepak Jain
AiNET
James R. Cutler
2008-05-02 22:09:25 UTC
Permalink
Yes -- spent mostly on getting management approval.
Post by Deepak Jain
Does it take most network operators more than 1000 days to make an IPv6
plan and start implementing it?
Randy Bush
2008-05-02 22:15:15 UTC
Permalink
back office software
ip and dns management software
provisioning tools
cpe
measurement and monitoring and billing

and, of course, backbone and aggregation equipment that can actually
handle real ipv6 traffic flows with acls and chocolate syrup.

randy
Mikael Abrahamsson
2008-05-03 07:02:01 UTC
Permalink
Post by Randy Bush
back office software
ip and dns management software
provisioning tools
cpe
measurement and monitoring and billing
and, of course, backbone and aggregation equipment that can actually
handle real ipv6 traffic flows with acls and chocolate syrup.
Not to mention, you want to be able to do the regular antispoofing etc.,
and your security devices (which might be based on L2 switches doing DHCP
snooping) don't do IPv6, so you need to replace them (or live with lower
security), and this needs serious budget.
--
Mikael Abrahamsson email: ***@swm.pp.se
Joel Jaeggli
2008-05-03 07:14:45 UTC
Permalink
Post by Mikael Abrahamsson
Post by Randy Bush
back office software
ip and dns management software
provisioning tools
cpe
measurement and monitoring and billing
and, of course, backbone and aggregation equipment that can actually
handle real ipv6 traffic flows with acls and chocolate syrup.
Not to mention, you want to be able to do the regular antispoofing etc.,
and your security devices (which might be based on L2 switches doing DHCP
snooping) don't do IPv6, so you need to replace them (or live with lower
security), and this needs serious budget.
Or you'll have to revert to what you did before DHCP-filtering switches:
watch for replies from rogues and then update your MAC filters
accordingly, or drop the host onto a quarantine VLAN. That should work
quite well for rogue RAs and rogue DHCPv6 servers.

Obviously it's reactive rather than proactive, but it can be quite
effective if automated.
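The reactive scheme described here can be sketched as a small watcher loop. This is a hypothetical illustration only: it assumes some external feed of (source MAC, message type) events from a mirror port or sniffer, and the MAC addresses, event format, and action names are all made up.

```python
# Minimal sketch of reactive rogue-RA / rogue-DHCPv6 detection: classify
# server-style replies by source MAC against an allow-list and emit a
# quarantine action for anything unexpected. All values are hypothetical.

AUTHORIZED_SERVERS = {"00:11:22:33:44:55"}  # MACs allowed to send RA/DHCPv6 replies

def classify(events):
    """Return quarantine actions for rogue RA / DHCPv6 sources.

    events: iterable of (source_mac, msg_type) tuples, where msg_type is
    'router-advert' or 'dhcpv6-reply'.
    """
    actions = []
    seen = set()
    for mac, msg_type in events:
        if msg_type not in ("router-advert", "dhcpv6-reply"):
            continue  # ordinary host traffic; ignore
        if mac in AUTHORIZED_SERVERS or mac in seen:
            continue  # legitimate, or already handled
        seen.add(mac)
        # A real deployment would push a MAC filter or move the offending
        # port to a quarantine VLAN via the switch's management interface.
        actions.append(("quarantine", mac, msg_type))
    return actions

if __name__ == "__main__":
    observed = [
        ("00:11:22:33:44:55", "router-advert"),  # legitimate router
        ("de:ad:be:ef:00:01", "router-advert"),  # rogue RA
        ("de:ad:be:ef:00:01", "router-advert"),  # duplicate, already handled
        ("de:ad:be:ef:00:02", "dhcpv6-reply"),   # rogue DHCPv6 server
    ]
    for action in classify(observed):
        print(action)
```

The automation lives entirely in what you do with the emitted actions; the detection itself is just an allow-list comparison.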
Robert E. Seastrom
2008-05-08 00:22:06 UTC
Permalink
Post by Randy Bush
back office software
ip and dns management software
provisioning tools
cpe
measurement and monitoring and billing
and, of course, backbone and aggregation equipment that can actually
handle real ipv6 traffic flows with acls and chocolate syrup.
chiming in late here... the situation on the edge (been looking at a
lot of gpon gear lately) is pretty dismal.

i won't bother mentioning the vendor who claimed their igmp
implementation supported ipv6 "just fine - we're a layer 2 device;
it's plug-and-play". srsly.

---rob
Geoff Huston
2008-05-04 03:10:06 UTC
Permalink
Post by Mike Leber
Since nobody mentioned it yet, there are now less than 1000 days projected
http://www.potaroo.net/tools/ipv4/
....
Post by Mike Leber
ps. 1000 days assumes no rush, speculation, or hoarding. Do people do
that?
pps. Of course these are provocative comments for amusement. :)
I keep on saying: it's just a mathematical model, and the way this will play
out is invariably different from our best guesses. So to say "well there's
x days to go" is somewhat misleading as it appears to vest this model
with some air of authority about the future, and that's not a good idea!

IPv4 address allocation is a rather skewed distribution. Most address
allocations are relatively small, but a small number of them are relatively
large. It's the timing of this smaller set of actors who are undertaking
large deployments that will ultimately determine how this plays out. It
could be a lot faster than 1000 days, or it could be slower - it's very
uncertain. There could be some "last minute rush." There could be a change
in policies over remaining address pools as the pool diminishes, or ....

So, yes, the pool is visibly draining and you now can see all the way to
the bottom. And it looks like there are around 3 years to go ...
but that's with an uncertainty factor of at least +/- about 1 1/2 years.

regards,

Geoff
William Warren
2008-05-04 03:22:26 UTC
Permalink
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
Post by Geoff Huston
So, yes, the pool is visibly draining and you now can see all the way to
the bottom. And it looks like there are around 3 years to go ...
but that's with an uncertainty factor of at least +/- about 1 1/2 years.
_______________________________________________
NANOG mailing list
http://mailman.nanog.org/mailman/listinfo/nanog
--
Registered Microsoft Partner

My "Foundation" verse:
Isa 54:17
Nathan Ward
2008-05-04 03:35:59 UTC
Permalink
Post by William Warren
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
Unless you're expecting those organisations to be really nice and make
that address space available to other organisations (ie. their RIR/
LIR, or the highest bidder on ebay), then I don't see how that's
relevant - whether they've got machines on those addresses or not,
from an outsider's point of view the address space is unavailable for
them to use.

..or, maybe your thought is that at some point these guys will start
using addresses in those /8s, and stop requesting new allocations from
their RIR/LIR, which will in turn slow down IPv4 allocations? I'm not
sure, but licking my finger and sticking it out the window suggests
that allocations to those with little-utilised /8s is a fairly small
percentage.

--
Nathan Ward
Paul Vixie
2008-05-04 16:39:17 UTC
Permalink
Post by Nathan Ward
Post by William Warren
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
Unless you're expecting those organisations to be really nice and make
that address space available to other organisations (ie. their RIR/
LIR, or the highest bidder on ebay), ...
first, a parable:

in datacenters, it used to be that the scarce resource was rack space, but
then it was connectivity, and now it's power/heat/cooling. there are fallow
fields of empty racks too far from fiber routes or power grids to be filled,
all because the scarcity selector has moved over time. some folks who were
previously close to fiber routes and/or power grids found that they could
do greenfield construction and that the customers would naturally move in,
since too much older datacenter capacity was unusable by modern standards.

then, a recounting:

michael dillon asked a while back what could happen if MIT (holding 18/8)
were to go into the ISP business, offering dialup and/or tunnel/VPN access,
and bundling a /24 with each connection, and allowing each customer to
multihome if they so chose. nobody could think of an RIR rule, or an ISP
rule, or indeed anything else that could prevent this from occurring. now,
i don't think that MIT would do this, since it would be a distraction for
them, and they probably don't need the money, and they're good guys, anyway.

now, a prediction:

but if the bottom feeding scumsuckers who saw the opportunity now known as
spam, or the ones who saw the opportunity now known as NXDOMAIN remapping,
or the ones who saw the opportunity now known as DDoS for hire, realize that
the next great weakness in the internet's design and protocols is explosive
deaggregation by virtual shill networking, then we can expect business plans
whereby well suited shysters march into MIT, and HP, and so on, offering to
outsource this monetization. "you get half the money but none of the
distraction, all you have to do is renumber or use NAT or IPv6, we'll do
the rest." nothing in recorded human history argues against this occurring.
--
Paul Vixie
Tomas L. Byrnes
2008-05-04 18:37:11 UTC
Permalink
I'm not sure that I would tar everyone who does NXDOMAIN remapping with
the same brush as spam and DDoS. Handled the way OpenDNS does it, on an
opt-in basis, it's a "good thing" IMO.

I would also say that disaggregating and remarketing dark address space,
assuming it's handled above board and in a way that doesn't break the
'net, could be a "very good thing". The artifact of MIT and others
having /8s while the entire Indian subcontinent scrapes for /29s can
hardly be considered optimal or right. It's time for the supposedly
altruistic good guys to do the right thing, and give back the resources
they are not using, that are sorely needed. How about they resell it and
use the money to make getting an education affordable?

The routing prefix problem, OTOH, is an artificial shortage caused by
(mostly one) commercial entities maximizing their bottom line by
producing products that were obviously underpowered at the time they
were designed, so as to minimize component costs, and ensure users
upgraded due to planned obsolescence.

Can you give me a good technical reason, in this day of 128 bit network
processors that can handle 10GigE, why remapping the entire IPv4 address
space into /27s and propagating all the prefixes is a real engineering
problem? Especially if those end-points are relatively stable as to
connectivity, the allocations are non-portable, and you aggregate.

How is fork-lifting the existing garbage for better IPv4 routers any
worse than migrating to IPv6? At least with an IPv4 infrastructure
overhaul, it's relatively transparent to the end user. It's not
either/or anyway. Ideally you would have an IPv6 capable router that
could do IPv4 without being babied as to prefix table size or update
rate.

IPv4 has enough addresses for every computer on Earth, and then some.

That having been said, I think going to IPv6 has a lot of other benefits
that make it worthwhile.

YMMV, IANAL, yadda yadda yadda
Paul Vixie
2008-05-04 19:08:31 UTC
Permalink
Post by Tomas L. Byrnes
I'm not sure that I would tar everyone who does NXDOMAIN remapping with
the same brush as SPAM and DDOS. Handled the way OpenDNS does, on an
opt-in basis, it's a "good thing" IMO.
i agree, and i'm on record as saying that since opendns doesn't affect the
people who do not knowingly sign up for it, and that it's free even to folks
who opt out of the remapping, it is not an example of inappropriate trust
monetization (as it would be if your hotel or ISP did it to you without your
consent, or offered you no alternative, or offered you no opt-out.)
Post by Tomas L. Byrnes
I would also say that disaggregating and remarketing dark address space,
assuming it's handled above board and in a way that doesn't break the
'net, could be a "very good thing".
that's a "very big if".
Post by Tomas L. Byrnes
The routing prefix problem, OTOH, is an artificial shortage caused by
(mostly one) commercial entities maximizing their bottom line by
producing products that were obviously underpowered at the time they
were designed, so as to minimize component costs, and ensure users
upgraded due to planned obsolescence.
i completely disagree, but, assuming you were right, what do you propose to
do about it, or propose that we all do about it, to avoid having it lead
to some kind of global meltdown if new prefixes start appearing "too fast"?
Post by Tomas L. Byrnes
Can you give me a good technical reason, in this day of 128 bit network
processors that can handle 10GigE, why remapping the entire IPv4 address
space into /27s and propagating all the prefixes is a real engineering
problem? Especially if those end-points are relatively stable as to
connectivity, the allocations are non-portable, and you aggregate.
you almost had me there. i was going to quote some stuff i remember tony li
saying about routing physics at the denver ARIN meeting, and i was going to
explain three year depreciation cycles, global footprints, training, release
trains, and some graph theory stuff like number of edges, number of nodes,
size of edge, natural instability. could've been fun, especially since many
people on this mailing list know the topic better than i do and we could've
gone all week with folks correcting each other in the ways they corrected me.

but the endpoints aren't "stable" at all, not even "relatively." and the
allocations are naturally "portable". and "aggregation" won't be occurring.
so, rather than answer your "technical reason" question, i'll say, we're in
a same planet different worlds scenario here. we don't share assumptions
that would make a joint knowledge quest fruitful.
Post by Tomas L. Byrnes
How is fork-lifting the existing garbage for better IPv4 routers any
worse than migrating to IPv6? At least with an IPv4 infrastructure
overhaul, it's relatively transparent to the end user. It's not
either/or anyway. Ideally you would have an IPv6 capable router that
could do IPv4 without being babied as to prefix table size or update
rate.
forklifting in routers that can speak ipv6 means that when we're done, the
new best-known limiting factor to internet growth will be something other
than the size of the address space. and noting that the lesser-known factor
that's actually much more real and much more important is number of prefixes,
there is some hope that the resulting ipv6 table won't have quite as much
nearly-pure crap in it as the current ipv4 has. eventually we will of course
fill it with TE, but by the time that can happen, routing physics will have
improved some. my hope is that by the time a midlevel third tier multihomed
ISP needs a dozen two-megaroute dual stack 500Gbit/sec routers to keep up
with other people's TE routes, then, such things will be available on e-bay.

everything about IP is transparent to the end user. they just want to click
on stuff and get action at a distance. dual stack ipv4/ipv6 does that pretty
well already, for those running macos, vista, linux, or bsd, whose providers
and SOHO boxes are offering dual-stack. there's reason to expect that end
users will continue to neither know nor care what kind of IP they are using,
whether ipv6 takes off, or doesn't.
Post by Tomas L. Byrnes
IPv4 has enough addresses for every computer on Earth, and then some.
if only we didn't need IP addresses for every coffee cup, light switch,
door knob, power outlet, TV remote control, cell phone, and so on, then we
could almost certainly live with IPv4 and NAT. however, i'd like to stay
on track toward digitizing everything, wiring most stuff, unwiring the rest,
and otherwise making a true internet of everything in the real world, and
not just the world's computers.
Post by Tomas L. Byrnes
That having been said, I think going to IPv6 has a lot of other benefits
that make it worthwhile.
me too.
Joel Jaeggli
2008-05-04 19:12:48 UTC
Permalink
Post by Tomas L. Byrnes
IPv4 has enough addresses for every computer on Earth, and then some.
There are approximately 3.4 billion or a little less usable IP
addresses. There are 3.3 billion mobile phone users buying approximately
400,000 IP-capable devices a day. That's a single industry;
notwithstanding how they are presently employed, what do you think those
deployments are going to look like in 5 years? In 10?

How many IP addresses do you need to NAT 100 million customers? How much
state do you have to carry to do port demux for their traffic?

I guess making it all scale is someone else's problem...
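Joel's arithmetic can be sketched out directly. The 3.4 billion and 400,000/day figures are from this post; the per-customer session count and usable-ports-per-address figures are illustrative assumptions, not numbers anyone in the thread cites.

```python
# Back-of-the-envelope scaling check. Figures marked (post) come from the
# message above; the NAT parameters are assumed for illustration.

USABLE_V4 = 3_400_000_000      # ~3.4B usable IPv4 addresses (post)
DEVICES_PER_DAY = 400_000      # IP-capable mobile devices per day (post)

# Devices from this one industry alone, over 5 and 10 years:
five_years = DEVICES_PER_DAY * 365 * 5    # 730 million
ten_years = DEVICES_PER_DAY * 365 * 10    # 1.46 billion, against 3.4B usable

# NAT state for 100M customers: assume ~200 concurrent sessions each and
# ~64,000 usable ports per public IPv4 address (both assumed figures).
customers, sessions_each, ports_per_ip = 100_000_000, 200, 64_000
concurrent_states = customers * sessions_each          # 20 billion flow entries
public_ips_needed = concurrent_states // ports_per_ip  # ~312,500, ports alone

print(five_years, ten_years, concurrent_states, public_ips_needed)
```

Even under these mild assumptions, the translator has to hold tens of billions of flow entries, which is the "how much state" question in concrete terms.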
David Conrad
2008-05-05 03:21:28 UTC
Permalink
Post by Tomas L. Byrnes
The artifact of MIT and others
having /8s while the entire Indian subcontinent scrapes for /29s can
hardly be considered optimal or right.
While perhaps intended as hyperbole, this sort of statement annoys me
as it demonstrates an ignorance of how address allocation mechanisms
work. It may be the case that organizations in India (usually people
cite China, but whatever) might "scrape for /29s", but that is not
because of a lack of address space at APNIC, but rather policies
imposed by the carrier(s)/PTT/government.
Post by Tomas L. Byrnes
It's time for the supposedly
altruistic good guys to do the right thing, and give back the
resources
they are not using, that are sorely needed.
"For the good of the Internet" died some while back. There is
currently no incentive for anyone with more address space than they
need to return that address space.
Post by Tomas L. Byrnes
How about they resell it and
use the money to make getting an education affordable?
If you believe this appropriate, I suggest you raise it on
Post by Tomas L. Byrnes
The routing prefix problem, OTOH, is an artificial shortage caused by
(mostly one) commercial entities maximizing their bottom line
[...]
Especially if those end-points are relatively stable as to
connectivity, the allocations are non-portable, and you aggregate.
A free market doesn't work like that, prefixes aren't stable, and the
problem is that you can't aggregate. If you're actually interested in
this topic, I might suggest looking at the IRTF RRG working group
archives.
Post by Tomas L. Byrnes
IPv4 has enough addresses for every computer on Earth, and then some.
Unless you NAT out every bodily orifice, not even close.

Regards,
-drc
Randy Bush
2008-05-05 03:00:32 UTC
Permalink
Post by Paul Vixie
but if the bottom feeding scumsuckers who saw the opportunity now known as
spam, or the ones who saw the opportunity now known as NXDOMAIN remapping,
or the ones who saw the opportunity now known as DDoS for hire, realize that
the next great weakness in the internet's design and protocols is explosive
deaggregation by virtual shill networking, then we can expect business plans
whereby well suited shysters march into MIT, and HP, and so on, offering to
outsource this monetization. "you get half the money but none of the
distraction, all you have to do is renumber or use NAT or IPv6, we'll do
the rest." nothing in recorded human history argues against this occurring.
paul, this is not the spanish inquisition or the great crusades.
nothing in human history argues against a lot of fantasies and black
helicopters. and yes, some of them actually come true, c.f. iraq. but
i have a business to run, not a religious crusade. there is no news at
eleven, just more work to do.

some time back what we now call legacy space was given out under
policies which seemed like a good idea at the time. [ interestingly,
these policies were similar to the policies being used or considered for
ipv6 allocations today, what we later think of as large chunks that may
or may not be really well utilized. have you seen the proposal in ripe
to give everyone with v4 space a big chunk of v6 space whether they want
it or not? ] the people who gave those allocations and the people (or
organizations) who received them were not evil, stupid, or greedy. they
were just early adopters, incurring the risks and occasional benefits.

maybe it benefits arin's desperate search for a post-ipv4-free-pool era
business model to cast these allocation holders as evil (see the video
of arin's lawyer at nanog and some silly messages on the arin ppml
list), with the fantasy that there is enough legacy space that arin can
survive with its old business model for an extra year or two. i think
of this as analogous to the record companies sending the lawyers out
instead of joining the 21st century and getting on the front of the
wave. i hope that the result in arin's case is not analogously tragic.

arin's legacy registration agreement is quite lopsided, as has been
pointed out multiple times. the holder grants and gives up rights and
gains little they do not already have. but i am sure there will be some
who will sign it. heck, some people click on phishing links.

i suggest we focus on how to roll out v6 or give up and get massive
natting to work well (yuchhh!) and not waste our time rearranging the
deck chairs [0] or characterizing those with chairs as evil.

randy

---

[0] my wife used to admonish folk to think about those fools on the
titanic who declined dessert.
Joel Jaeggli
2008-05-04 03:37:28 UTC
Permalink
Post by William Warren
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
which ones would those be?

legacy class A address space just isn't that big...
Suresh Ramasubramanian
2008-05-04 04:53:17 UTC
Permalink
Let's think smaller. /16 shall we say?

Like the /16 here. Originally the SRI / ARPANET SF Bay Packet Radio
network that started back in 1977. Now controlled by a shell company
belonging to a shell company belonging to a "high volume email
deployer" :)

http://blog.washingtonpost.com/securityfix/2008/04/a_case_of_network_identity_the_1.html

srs
Post by Joel Jaeggli
Post by William Warren
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
which ones would those be?
legacy class A address space just isn't that big...
Justin Shore
2008-05-09 16:05:20 UTC
Permalink
Post by Suresh Ramasubramanian
Let's think smaller. /16 shall we say?
Like the /16 here. Originally the SRI / ARPANET SF Bay Packet Radio
network that started back in 1977. Now controlled by a shell company
belonging to a shell company belonging to a "high volume email
deployer" :)
http://blog.washingtonpost.com/securityfix/2008/04/a_case_of_network_identity_the_1.html
Which leads me to ask an OT but slightly related question. How do other
SPs handle the blacklisting of ASNs (not prefixes but entire ASNs). The
/16 that Suresh mentioned here is being originated by a well-known spam
factory. All prefixes originating from that AS could safely be assumed
to be undesirable IMHO and can be dropped. A little Googling for that
/16 brings up a lot of good info including:

http://groups.google.com/group/news.admin.net-abuse.email/msg/5d3e3f89bb148a4c

Does anyone have any good tricks for filtering on AS path that they'd
like to share? I already have my RTBH set up, so setting the next-hop
for all routes originating from a given ASN to one of my blackhole
routes (to null0, a sinkhole, or a scrubber) would be ideal. Not accepting
the route at all and letting uRPF drop the traffic would be ok too.
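One common shape for this is an AS-path regex matched in an inbound route-map that rewrites the next-hop to a blackholed address. The sketch below is IOS-style and not a drop-in config: AS 64496 and AS 64500 are documentation ASNs standing in for the offending and local networks, and 192.0.2.1 / 198.51.100.1 are documentation addresses, all placeholders.

```
! Match any route whose AS path ends in (i.e. originates from) AS 64496.
ip as-path access-list 50 permit _64496$

! Rewrite the next-hop for matching routes; pass everything else.
route-map BLACKHOLE-BY-ORIGIN permit 10
 match as-path 50
 set ip next-hop 192.0.2.1
route-map BLACKHOLE-BY-ORIGIN permit 20

router bgp 64500
 neighbor 198.51.100.1 route-map BLACKHOLE-BY-ORIGIN in

! Route the blackhole next-hop to null; traffic toward those prefixes is
! dropped, and uRPF then discards traffic sourced from them as well.
ip route 192.0.2.1 255.255.255.255 Null0
```

Pointing the next-hop at a sinkhole or scrubber address instead of a Null0-routed one gives the other variants mentioned above; denying in a prefix filter instead of permitting in the route-map gives the "don't accept the route at all" variant.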

Justin

David Conrad
2008-05-05 03:01:09 UTC
Permalink
Post by Joel Jaeggli
Post by William Warren
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
which ones would those be?
While I wouldn't call it hoarding, can any single (non-ISP)
organization actually justify a /8? How many students does MIT have
again?
Post by Joel Jaeggli
legacy class A address space just isn't that big...
There is more legacy space (IANA_Registry + VARIOUS, using Geoff's
labels) than all space allocated by the RIRs combined.

Regards,
-drc
Patrick W. Gilmore
2008-05-05 03:22:19 UTC
Permalink
Post by David Conrad
Post by Joel Jaeggli
Post by William Warren
That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.
which ones would those be?
While I wouldn't call it hoarding, can any single (non-ISP)
organization actually justify a /8? How many students does MIT have
again?
<http://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology#Student_demographics>
<quote>
MIT enrolls more graduate students (approximately 6,000 in total) than
undergraduates (approximately 4,000).
</quote>

Let's assume 2 staff/faculty per student (don't we wish :). So that
would be 30K total. Let's further assume ~10 IP addresses per person to
deal with laptops, servers, other computers, routers, etc. We're now at
330K.

That's nowhere near 25% of the /8 they have. Good thing they are
announcing a /15, /16, and a /24* originated from their ASN too.

Just so we are clear, I have no idea how many servers, computers, or
other things MIT might have to justify a /8, /15, /16, and /24. I'm
just pointing out the number of students alone clearly doesn't justify
their IP space.

UCLA, where the Internet was invented, only has 5x/16 + 2x/24.
Obviously they're so much smarter they can utilize IP space better.
(No, I'm not saying that just 'cause I went to UCLA. :)
--
TTFN,
patrick


* 18.0.0.0
* 128.30.0.0/15
* 128.52.0.0
* 192.233.33.0
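Sanity-checking the back-of-the-envelope estimate above (a quick sketch; the ~330K campus-wide address figure is this thread's rough number, not data, and "addresses in a /8" here ignores subnetting overhead):

```python
# Rough check: how much of 18/8 would the thread's estimate actually use?
SLASH_8 = 2 ** 24          # 16,777,216 addresses in a /8
estimate = 330_000         # campus-wide address estimate from the post

share = estimate / SLASH_8 # fraction of the /8 consumed
print(f"{share:.1%} of a /8")
```

That comes out around 2%, an order of magnitude below the 25% utilization mark mentioned earlier in the thread.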
Iljitsch van Beijnum
2008-05-05 08:55:31 UTC
Permalink
Post by Mike Leber
Since nobody mentioned it yet, there are now less than 1000 days projected
http://www.potaroo.net/tools/ipv4/
Unfortunately that won't load for me over IPv6, path MTU black hole...
Post by Mike Leber
ps. 1000 days assumes no rush, speculation, or hoarding. Do people do
that?
Since the only people who can get really large blocks of IP addresses
are the people who already have really large blocks of IP addresses,
the eventual distribution of large blocks won't differ much depending
on whether there will be a rush or not. Obviously the 99% of requests
that use up only 17% of the space each year are of no importance in
the grand scheme of things.

I was about to write that 1000 days is too optimistic/pessimistic, but
(after trying to compensate for ARIN's strange bookkeeping practices)
it looks like in 2006, 163 million addresses were given out, and 196
million in 2007. If the next few years also see an increase of 20% in
yearly address use, then 1000 days sounds about right.

That means we'd have to use up 235 million addresses this year, while
so far we're at 73 million, which puts us on track for 219 million. So
maybe it will be 1050 days (which leaves us exactly a million
addresses per day).
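A back-of-the-envelope version of the projection above. The 2006/2007 totals and the 73-million figure come from this post; the day-of-year used to annualize the 2008 run rate is an assumption (the post is dated early May).

```python
# Reproduce the growth projection from the post. Totals are the post's;
# day_of_year = 122 (early May) is an assumed annualization point.

given_out = {2006: 163_000_000, 2007: 196_000_000}
growth = given_out[2007] / given_out[2006]   # ~1.20, i.e. ~20%/yr

needed_2008 = given_out[2007] * growth       # roughly 235M to stay on the curve

so_far, day_of_year = 73_000_000, 122        # 73M so far (assumed day 122)
run_rate = so_far / day_of_year * 365        # annualized: ~218M (post says 219M)

print(round(growth, 2), round(needed_2008 / 1e6), round(run_rate / 1e6))
```

The ~218-219M run rate falling short of the ~235M the 20%-growth curve demands is exactly why the post stretches the estimate from 1000 to roughly 1050 days.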

BTW, about the India thing: they should take their cue from China,
which only had a few million addresses at the turn of the century but
is now in the number two spot at ~ 150 million addresses. (Comparison:
the US holds 1.4 billion, India 15 million, just behind Sweden, which
has 17 million.) China is now the biggest user of new address space.

http://www.bgpexpert.com/addressespercountry.php
http://www.bgpexpert.com/ianaglobalpool.php
http://www.bgpexpert.com/addrspace2007.php

(Make it "www.ipv4.bgpexpert..." if you have trouble reaching the site
over v6.)