Archives

These are unedited transcripts and may contain errors.


Plenary Session
14 May, 2013
2 p.m.


CHAIR: We are already one minute over, we're late. I think we should just shut this thing down right now because it's all a failure. Welcome back to RIPE.

CHAIR: We are going to start the somnolent, exciting period immediately following lunch, as you digest your carbohydrates, with a series of lightning talks to spark your interest and perhaps keep you awake. If you are unable to stay awake, please sleep quietly. Do not disturb your neighbours.

We are going to start with Niall O'Reilly, who is going to confuse us, and it's going to be...

NIALL O'REILLY: And for a special treat for the stenographer, this is going to be not a lightning talk but a lightning silence, because I am not going to tell you anything you don't know, and most of you are bored stiff hearing my voice anyway, so take your noses out of your e-mail, watch the slides and think about them.

CHAIR: All questions will have to be delivered in the format of a picture.

That's not true at all. Are there any questions?

What does it all mean?

NIALL O'REILLY: I think, by the reaction to the almost-last slide, people got the point. There is no point labouring it. Everybody's favourite XKCD cartoons are the ones without the words. The only thing I'd add is that this isn't just applicable in the RIPE community but in lots of other communities, and if we don't lose this point, we're likely to do better than some of the other ones.

AUDIENCE SPEAKER: My question is: do those slides mean you either abolish the IETF or get rid of the IETF?

NIALL O'REILLY: Definitely.

CHAIR: Well played, sir. Thank you, Niall.


CHAIR: Excellent, we should do more lightning talks like this, this is fantastic. Next up we have Erik Bais, who is going to bring an interesting request from the SANS Institute.



ERIK BAIS: I am going to give a short introduction about SANS and the ISC, and specifically what the ISC does, and after that, I am going to ask you all for some input on that.

So, is anybody in the room familiar with SANS or the ISC? All right, very good. So, SANS is a training organisation, and they also provide certifications, you know, similar to what the CISSP is doing, and one of the give-back-to-the-community initiatives that they are doing is the Storm Center. The Internet Storm Center is basically a group of volunteers that provide information: they gather information from the DShield project and report that back to the community, and currently there are about 30 handlers doing that. It started back in 1999, and it has basically evolved into the current organisational structure that they have.

So the ISC is basically doing this day-to-day monitoring of what's happening on the Internet, specifically, you know, looking at port scans, types of attacks, and reporting on that on a daily basis. One of the handlers is the handler of the day, and he will do short posts on the ISC website reporting about, you know, what's going on in the security world, what they see. Now, this is also the limitation of it, because they don't see everything; they see a lot, but not everything.

And a lot of the information that they have comes from the DShield project. DShield basically collects information, log files; they centralise that, and from there they gather that information: what kind of attacks do we see on residential gateways? And they have a lot. I think they currently already have about 400,000 devices logging into DShield. And so, as that type of logging grew, they added another project, the honeypot project, and they are, you know, starting to increase and gather more information, to see what they can get out of this.

The information from DShield is basically open: everybody who wants to use that information can request it, as long as they give credit for it. It's pretty open, and they also have some very neat APIs. Personally, I like to use the API for finding abuse contacts after DDoS attacks and those kinds of things, because for an IP address the API provides the AS number, the abuse contacts and those kinds of things; it's just one of the scripts that we use for this kind of thing in our own network. In our network, we also have some testing done together with the guys from the ISC. Basically, the ISC provided us with a box to put up in our network, and they asked: can we actually get your allocations pointing to this box, so that all the unused resources are looked at by the sensor? So we have some IP space which is not in use, and as soon as we do an assignment to a customer, you know, it's not pointing to the sensor any more. That actually provided a lot of interesting information for the handlers. And we're doing this not only for v4 but also for v6.
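
To make the API remark concrete, here is a minimal Python sketch of the kind of abuse-contact lookup script Erik describes. The endpoint form (https://isc.sans.edu/api/ip/...?json) is the public ISC one, but the exact JSON field names used here ("as", "asname", "asabusecontact") are assumptions, not taken from the talk; check https://isc.sans.edu/api/ for the authoritative list.

    import json
    import urllib.request

    def dshield_ip_info(ip):
        # Query the public DShield/ISC IP endpoint; "?json" asks for JSON output.
        url = "https://isc.sans.edu/api/ip/%s?json" % ip
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        info = data.get("ip", {})
        # Field names below are assumed from memory of the API, not guaranteed.
        return info.get("as"), info.get("asname"), info.get("asabusecontact")

    print(dshield_ip_info("192.0.2.1"))  # documentation address, for illustration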

So the idea that we want to pitch today (this basically came up when I met with Johannes Ullrich, who is the CEO, three weeks ago) is: wouldn't it be awesome if we could get multiple VPSes in various networks, to see if we can get other ISPs in the community to point their allocations to a sensor, so that we can get more information into the DShield project? And specifically, this is different from what we normally do with the DShield project, because the DShield project is basically residential gateways, and you need to be very lucky to see hits on multiple residential gateways which are also participating in the DShield project. Now, for these sensors, it's going to be easier, because you'll actually have a sensor with a lot of IP space pointing to it, and typical port scans go through the whole range of that ISP. So you'll see bursts of port scans once they go through your network, and those will all be linked in that box and logged, so it's much easier to see what's actually going on. So this is basically the introduction slide, and, you know, let's go back a bit. We're looking for ISPs that are willing to participate here and actually want to help out the ISC with a VPS, pointing their allocations to it for the unused space. And, obviously, the payback, what you get out of it: we'll be able to provide custom reports on what's going on in your network on the unused space, and it will provide a much quicker insight into new scans globally.

So, the honeypots we're looking at will run the latest releases. We can provide the VMware or the VPS images. It's going to be a low-interaction honeypot; basically it's all closed except for SSH, and dual stack is preferred, I would say almost required: have both v6 and v4 on it. Basically, everybody that wants to participate in this, please do send an e-mail to this address, and at that point the guys from SANS will pick it up very quickly.

I have some additional information in the URLs on the slides. Specifically, the presentation at the blue URL here was a presentation by Johannes Ullrich that provides a very good insight into how DShield and the ISC work together, and basically how it works in the real world, because people can look into the DShield data to see whether whatever they are seeing in their logs is something unique or quite common.

So that's pretty interesting. I also listed the API URL for people that want to have a look at it.

So, the question now is, who has the first question or the first VPS?

CHAIR: Thank you very much. Questions about this project, public offers to assist?

AUDIENCE SPEAKER: Hi, David Freedman from Claranet. I'd like to know who has all of this unused IPv4 space to spare? That was my only question. Thank you.

CHAIR: That was more of a snark than a question. Are there any...

ERIK BAIS: It's not only for v4; some of you actually have some v4 left, but also, you know, if you can provide it just for v6, I am more than happy to take your VPS.

CHAIR: No questions, thank you very much.

CHAIR: Next up we have Tomas Hlavacek, who will be talking about the Universal Looking Glass. Universal is a very tall order; this is going to be exciting.

TOMAS HLAVACEK: Thank you very much. So, good afternoon, my name is Tomas Hlavacek and I am here to try and draw attention towards a new open source project called Universal Looking Glass.

First thing: what a Looking Glass is. Of course, you all know that a Looking Glass is a piece of software, usually a web application, for analysing BGP announcements and visibility, for gaining some insight into the routing table of a remote network, and for running troubleshooting utilities like ping or traceroute. And there are a couple of problems in the Looking Glass concept.

For example, constant suspicion towards clients, which could be perceived as a security threat; the challenge of supporting multiple vendors and multiple BGP implementations, which means a lot of different code; and the challenge of visual variability, or templating, because a Looking Glass is usually part of some corporate website and has to match the rest of the site, so it has to have the same visual style. There is, of course, an input reliability problem, because what a Looking Glass usually does is screen scraping from routers or BGP implementations, which is not a really reliable process. And, of course, there is inherent configuration complexity in a Looking Glass.

So, I wanted to tackle all these problems a bit when designing Universal Looking Glass, and my approach to security was a lot of parameter checking, verbose logging and rate limiting.
I tried to implement support for multiple vendors at different levels, and I have a lot of shared code. Now I'm able to support BIRD, Cisco IOS and JunOS, and I am working on IOS XR. For templates I have Python and the Genshi engine. For screen scraping I am using this, and of course a lot of testing for improving reliability.

And there is a lot of autosensing when it's possible.

There are a couple of notable features of Universal Looking Glass. First, it's open source software, written in Python. It's a simple CGI, it's low profile, so few dependencies and few libraries. Universal Looking Glass is able to visualise the BGP table using the Graphviz library, and it decorates text output, building HTML tables out of text tables, which improves readability. And there is a WHOIS client and bindings for the RIPE database. Now, I'm going to show a few screen shots, especially screen shots of these last three features.

So the first one is the BGP table visualisation: you can see a graphical interpretation of the "show bgp ipv4" command etc., or BIRD's equivalent; you can see the AS paths for each received path, for one particular prefix.
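
As an illustration of the visualisation idea (not the actual Universal Looking Glass code), here is a small Python sketch that turns the AS paths received for one prefix into a Graphviz digraph, using the "graphviz" package; the prefix and AS numbers are documentation examples.

    from graphviz import Digraph

    def draw_as_paths(prefix, as_paths):
        g = Digraph(comment="AS paths for %s" % prefix)
        g.node(prefix, shape="box")
        for path in as_paths:
            hops = [str(asn) for asn in path]
            g.edge(prefix, hops[0])                  # prefix to first-hop AS
            for left, right in zip(hops, hops[1:]):  # then hop by hop
                g.edge(left, right)
        return g

    g = draw_as_paths("192.0.2.0/24", [[64496, 64499], [64497, 64498, 64499]])
    g.render("as-paths", format="png")               # writes as-paths.png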

Another screen shot is the HTML table generated out of a text table; the IP prefixes and AS numbers in it are clickable, so you can run a WHOIS client directly from this page. The WHOIS client looks like this: it's just a window, or pop-up window, and there is a link binding it to the RIPE database interface.

So, we have one notable deployment, in NIX, on route servers which are running BIRD, and you can go download and try it, or you can take a release from the lab.nic.cz page. That's it, thank you very much. If there are some questions, I would be glad to answer.

(Applause)

CHAIR: Thank you, questions?

AUDIENCE SPEAKER: Hi, Blake, with L33 Networks. Just a quick one. Have you talked to the RANCID people at all about this, about maybe merging their code? I know that the LG code over there has not been super well maintained, but they do kind of kick it and thump it and patch it when it needs it, and it might be better to have these two things try to follow the same path rather than having yet another one, because most people that use RANCID generally use that for their Looking Glass, unless they have developed something else internally. So it might be a good thing to take this onto the RANCID mailing list and say, look what I have got, and have them have a look at it.

TOMAS HLAVACEK: Definitely. This started as a Looking Glass only for BIRD, and the support for Cisco and Juniper is some sort of additional feature, or it started as additional features. So, no, I haven't discussed it with anybody; I just tried to write another Looking Glass. I would be happy to join some mailing list or join some discussions on this.

CHAIR: Any other questions? Thank you very much.

(Applause)

I have an announcement. I am told to remind attendees... I will remind you and see if you go. There is a "meet the RIPE NCC Executive Board" BoF and breakfast tomorrow; the first number in the time is an 8. So schedule yourselves accordingly: from 8 to 9 in the Diplomat Room of the Sussex Restaurant. There you can find the members of the Executive Board of the RIPE NCC, and you can pester them about the things you wish they had done for you that they have not done for you. But you would have to be awake and needing breakfast in order to do that. So it's a test. Excellent...

Next up we have Job Snijders, who has a tale of mystery and confusion: the way in which a MAC address can somehow impact IPv4 forwarding behaviour. This is very confusing and he will explain more.

JOB SNIJDERS: Hello, RIPE. Thank you for having me here in Dublin. What I want to accomplish in this presentation is to share some operational experiences of curious perversions in MPLS networks, and by the end of it you should have been taught a little bit about how hashing works, what it means, what kind of problems can exist in this MPLS world, and how you might solve them.

My name is Job Snijders, I work for Atrato IP Networks, AS 5580. I have many hobbies, but most of them are Internet-related, to the point where I wake up, open my laptop and at some point go to bed again. And debugging the Internet is a very enjoyable thing.

This presentation is about why performance can be bad towards MAC addresses that start with a 6. And to understand why this is an issue, I have defined our problem statement.

In an MPLS network, the routers in the middle are not aware of the actual payloads or packets they are transporting. This was done by design, because we wanted some kind of magic technology to do anything over anything and just transport it regardless of what it actually is. So routers in the middle of MPLS networks have to guess what they are actually transporting, because MPLS should be kind of payload-agnostic, and when you have to guess or assume, things can go wrong, and we all know what assumptions actually are.

What is hashing? Hashing is where you take a fixed set of identifiers in, for instance, a packet, mangle them through an algorithm, and get some value, say between 1 and 16, and you can use that value to make decisions, such as load balancing decisions. The purpose of hashing is that with constant input you get constant output, so every time you enter the same values into a hashing algorithm, the same value should come out.

And in IP, the following fields are often used: the destination MAC address, the source IP address, the destination IP address, the TCP ports, or maybe even identifiers further down in the packet.

Vendors might allow you to customise what fields you want to hash on.
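
As a toy illustration of the principle (real routers do this in hardware, not with a general-purpose checksum), here is a short Python sketch: hash the flow fields, take the result modulo the number of links, and the same flow always lands on the same link.

    import zlib

    LINKS = 4  # number of parallel paths to balance over

    def pick_link(src_ip, dst_ip, proto, src_port, dst_port):
        key = "%s|%s|%d|%d|%d" % (src_ip, dst_ip, proto, src_port, dst_port)
        return zlib.crc32(key.encode()) % LINKS

    # Constant input, constant output: every packet of this flow maps to
    # the same link, so ordering within the flow is preserved.
    print(pick_link("192.0.2.1", "198.51.100.2", 6, 51234, 443))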

Now, what happens if you apply an IPv6 hashing method to an IPv4 packet? You know that IPv6 addresses are somewhat longer than IPv4 addresses, so the hashing implementation is different. With IPv4, you'll be looking at a source IP address where you expect 32 bits. With IPv6 hashing, you obviously need to hash on 128 bits. And if you overlay common hashing methods from the IPv6 world on an IPv4 packet, there are some red fields here that will change with every packet that's being sent.

And the most interesting one is the TCP sequence number. The TCP sequence number is updated with every packet you send, so every time you hash on that, you will get a different value out of your hashing algorithm.

And why would a vendor use IPv6 hashing on IPv4 packets? This is because vendors assume that if the first nibble of a payload is a 4, it must be an IPv4 packet and not an Ethernet frame. And if it's a 6, they'll assume that the payload is an IPv6 packet. But it could also be an Ethernet frame destined for a MAC address starting with a 6.
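
A small sketch of that speculation, assuming a simplified router model: the classifier sees only the first payload byte after the label stack, so an Ethernet frame whose destination MAC starts with 6 looks exactly like the first nibble of an IPv6 header.

    def speculate(payload):
        # What a P router guesses from the first nibble after the labels.
        nibble = payload[0] >> 4
        if nibble == 4:
            return "hash as IPv4"
        if nibble == 6:
            return "hash as IPv6"
        return "treat as Ethernet / hash on labels only"

    # An Ethernet frame whose destination MAC is 64:a1:b2:c3:d4:e5:
    frame = bytes.fromhex("64a1b2c3d4e5")
    print(speculate(frame))  # "hash as IPv6" -- wrong, and now the varying
                             # bytes at IPv6 field offsets feed the hash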

So, why are we even dealing with MAC addresses starting with a 4 or a 6? I call this the IEEE's greed, because what they used to do is sequentially assign MAC address blocks to vendors, and MAC addresses starting with a 4 or a 6 would have been like 20 years from now. But then in 2010, they decided that they would randomly allocate MAC address blocks from the space available to them, to cause collisions and actually force people to pay 2,000 dollars for a MAC address block. So, they changed their policy in 2010, then in 2011 vendors got blocks of MAC addresses starting with a 4 or a 6, and in 2012 those started rolling off the factories. It was in 2012 that I saw in our network that there was a surprising amount of MAC addresses with a 6. And all of the vendors have MAC addresses with a 6: Apple, Juniper, Cisco; like every manufacturer by now has a block of MAC addresses starting with a 6.

What am I building towards? What is the whole issue here? Packet misordering. If stuff arrives out of sequence, you get bad performance. It's harder to read that sentence because the letters are in the wrong order; it will cost you time to understand what is going on.

TCP implementations these days are aggressively tuned to deal with packet loss, but if stuff arrives in the wrong order, they are not tuned or optimised to deal with that, so your performance will degrade to the point that you really question: why am I buying this 10G circuit from Atlanta to Amsterdam and only getting a few hundred kilobytes per second of performance?

If you consider a typical network, with me on the left side and a friend on the right side: I have an ingress MPLS PE router, and I will send my packets into that router. This router is connected with either a layer 2 aggregate of many 10G links, or not a layer 2 aggregate but multiple layer 3 router-to-router links; it doesn't matter, there are multiple connections between those two routers. Then the router in the centre again has to make a decision: on which link will I put these packets? You might notice that out of the three available paths, one path is just slightly longer; maybe a different patch cable was used and the cable is 10 metres longer, or maybe it's from a different cable system from a different supplier. And stuff will arrive out of order if these packets are not consistently put onto the same link, because the packets put on the long link will take way longer to arrive at the destination than packets on the short link. So, it's not the router that puts the packets in a different order; it will just shoot them out over paths of different length, and that will cause misordering at the point of arrival.

If we look at an example pseudo-wire service from A to B: a packet arrives at a PE, and the PE will add two labels. One label is related to where the packet has to go, and the other label is related to what service the packet belongs to, so which VPLS instance or which pseudo-wire.

Let's go through some example cases. Remote peering is highly popular these days. You get a pseudo-wire from wherever your network is based to an exchange, and you can start peering with people even though you don't have a real presence in that metro. Most of the traffic going through remote peering these days is IPv4 packets containing TCP, although I have seen in my graphs that UDP is highly popular these days as well.

And what happens when people send their peering traffic into the pseudo-wire? The routers in the middle will mistakenly assume that there are multiple flows where actually there is only just one flow, namely the pseudo-wire, and start spreading the traffic around over the multiple links that are available to them.

This is what happens if you are unlucky. Is this a real issue? When I looked in October 2012, almost 3 percent of the MAC addresses I saw on our AMS-IX facing port were starting with a 6, and on DE-CIX links it was slightly lower. But these numbers grow, because every time you take a line card out of production, you put a brand new line card into production, and chances are that that new line card's MAC starts with a 6.

Other example cases where you might encounter misordering: customers buy VPLS or point-to-point circuits to connect offices, and enterprises always aim for the most optimised solution, so they'd be running like Windows file-sharing over very long distances to gain the most out of their network, and if you put misordering on top of that, you have a real problem. The enterprise will complain to you: why is stuff so slow? And the only thing they did wrong was that they bought a new line card with a MAC starting with a 6. That was their mistake.

So, I urge you, your ops teams or your NOC: put a "check the MAC address" item on your 'to do' list when debugging stuff. If weird things happen, ask what the MAC address is, because it might correlate to the behaviour in this talk.

So what solutions do we have? There is a thing called the pseudo-wire control word; it's an RFC that was published in February 2006, and the trick is that it creates a little bit of distance between the actual payload and the labels. So, a router that is guessing what the payload is and looks at the first nibble of the payload will actually encounter the control word instead of the destination MAC address, which would be in this area. Both ends, both PE routers, have to support the control word feature, but routers in the middle don't have to support it.

You could also use the control word to carry, for instance, a sequence number. The control word is supported by the majority of network vendors in the market today. Another option for doing better load balancing than just guessing what the payload is would be flow-aware transport for pseudo-wires, in other words FAT-PW. A new flow label is introduced at the innermost position, so you'd have the service label, say 50, and then the flow-aware transport label, say 12345. And the egress PE, the router at the far end that has to chop off all the labels and actually put the packet into the relevant service and send it to the customer, can discover that this label is being used.

Again, flow-aware transport: your backbone doesn't need to know about it. And there is a more modern version which is also applicable to IP transport instead of only pseudo-wires, and it's called Entropy Labels. With Entropy Labels, if the ingress PE decides that a stream of packets belongs to a single flow, it can put that hashing value, or randomised value, into the label stack: it signals with a reserved label value, the 7, that whatever comes after that label is the Entropy Label, and the routers in the middle can use those labels to hash and load balance accordingly, instead of guessing how they should load balance.

This solution does not require your backbone to know about this feature.
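
For the curious, here is a sketch of what such a label stack looks like on the wire, per RFC 6790: each 32-bit label stack entry is label (20 bits), traffic class (3), bottom-of-stack (1) and TTL (8), and the reserved label 7 is the Entropy Label Indicator. The label values here are illustrative, not from the talk.

    import struct

    def label_entry(label, tc=0, s=0, ttl=64):
        # label(20) | TC(3) | bottom-of-stack(1) | TTL(8), big-endian
        return struct.pack("!I", (label << 12) | (tc << 9) | (s << 8) | ttl)

    stack = (label_entry(50)             # transport/service label
             + label_entry(7)            # ELI: "an entropy label follows"
             + label_entry(12345, s=1))  # the entropy label, bottom of stack
    print(stack.hex())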

There are other solutions, quick-and-dirty workarounds. A thing called Duck Typing. In essence, the whole problem is the concept of a P router guessing what the payload is; you could prevent the whole misordering thing if it makes a better guess, if it almost never guesses wrongly, or at least guesses wrongly consistently. What you could do is check what you think should be the length of the packet against the actual length of the packet. So, if you receive a packet and the payload starts with a 6, instead of blindly assuming that it's IPv6, you do an additional check: if this were an IPv6 packet, would the packet length make any sense? If the packet length makes sense, then I'll assume it's IPv6 and hash accordingly. If it doesn't make sense, I have to assume it's an Ethernet frame and then hash as Ethernet, or don't hash at all.
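
A minimal sketch of that length check, assuming the classifier can see the whole payload: a real IPv6 header carries a payload length at bytes 4-5, and an IPv4 header a total length at bytes 2-3, so a first nibble that doesn't come with a plausible length is probably a MAC address. (Real implementations have to cope with padding, so this is the idea, not a production check.)

    import struct

    def classify(payload):
        nibble = payload[0] >> 4
        if nibble == 6 and len(payload) >= 40:
            # IPv6: 40-byte fixed header, payload length at offset 4
            plen = struct.unpack_from("!H", payload, 4)[0]
            if 40 + plen == len(payload):
                return "IPv6"
        if nibble == 4 and len(payload) >= 20:
            # IPv4: total length at offset 2 covers header plus data
            tlen = struct.unpack_from("!H", payload, 2)[0]
            if tlen == len(payload):
                return "IPv4"
        return "Ethernet (or unknown): do not speculate"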

We could also signal in MPLS what the payload is. So, instead of putting it in the label stack, you could create some kind of MPLS ethertype which signals: this MPLS packet is carrying an IP packet, or this one is carrying Ethernet frames. That could indicate to routers in the middle how they should hash, because they would know more about the payload. And you don't really need the separation between IPv4 and IPv6 unicast, because if it is signalled that the packet contains IP, then you know you have to check the first nibble: if it's a 4, you hash as IPv4, and if it's a 6, you hash as IPv6.

This is a nonexistent solution; it's more of a thought experiment about what you could do to make routers in the middle behave better.

Another option, in theory, would be to improve TCP and make it more robust against misordering. But we don't deem this a plausible solution at all, because nobody will upgrade every TCP stack on this planet.

Let's look at available solutions today. Brocade expects to release 5.5 in the next few weeks, and 5.5 will have a new feature which is essentially the Duck Typing that we discussed earlier, where it will do an additional check on the length fields to see if it's v4 or v6.

Juniper silently introduced that feature in 12.2R3. I asked them many times: guys, do you have a workaround for this issue? And they said no, but meanwhile they were shipping an OS that actually did the double check, so that was a pleasant surprise. And with Cisco, I'm unaware of what their current stance is with regard to a Duck Typing solution.

The MPLS ethertype doesn't exist today; there is no standard defining it. Also, the financial incentive for vendors is low, because it would mean that legacy or old equipment could be used in the field longer, and that's not of interest to them. So why would you build a feature that actually makes your hardware last longer?

And TCP, I don't really expect any improvement in that space any time soon.

Let's go through some networks that we know of. AMS-IX is not suffering from the misordering today, because they know that their traffic across the platform is always Ethernet frames; they are not an IP network, they are a big layer 2 network. So they can disable on Brocade the hashing feature that would incorrectly let routers in the middle presume what the payload is, and that way they don't have an issue.

On Brocade the work-around for packet misordering is "No load balance speculate MPLS-IP."

For Atrato: we have a mix of Ethernet frames and IP packets across the network, and what we did was also to disable the speculation features for IP and push the BGP-free core back a little bit, so that we wouldn't misorder packets in our network. So, every MPLS packet is just treated as Ethernet, which is true in our network, but it's not the cleanest solution.

LINX is another big network which has to deal with MAC addresses starting with a 6. Their whole load balancing strategy is different: on the edges of the network, the ingress PE adds a little bit of entropy to the packets, and the routers in the middle don't presume to know anything about the payload at all; they only hash and balance based on labels. So if there is enough diversity in the label stack, you get nice load balancing, but the routers in the middle are actually quite stupid.

Also, these Juniper MX routers set a control word, so if you had packets flowing over a non-Juniper core, it wouldn't see the first nibble of the packets.

Another interesting thing I read about with MAC addresses starting with a 6 is on the ASR 9000: a packet would arrive at the ASR, it would look at the first nibble, see there is a 6 and check the length of the packet, and if that didn't match, it would just discard the packet, thinking it was corrupt because the length field did not match the actual length. They fixed that already. But, again, this is an example where you buy a new line card and you actually suffer because your line card's MAC starts with a 6.

We should keep in mind that this issue was documented in the best current practices document, BCP 128, already in 2007; six-and-a-little-bit years ago people realised that if you make assumptions about a 4 or a 6 in the payload, there could be issues when MAC addresses are handed out starting with a 4 or a 6. The MPLS control word drafts already mentioned this problem in 2004; almost ten years ago people already signalled that this could be a serious issue, and that's why the control word feature was created in the first place.

So, if you did not implement control words or entropy labels or anything that helps load balancing in big networks, it kind of implies, in my eyes, that you want to see the world burn. Load balancing these days is of the utmost importance: the Internet is growing, flows are getting bigger and bigger, and you need to be able to shift traffic around over multiple links in a sensible way.

This concludes my presentation regarding issues you might see with MAC addresses. Are there any questions regarding what I have told you?

CHAIR: You have stunned them into silence...

JOB SNIJDERS: Is that a good thing?

CHAIR: No, actually that's a good point. Are there questions?

AUDIENCE SPEAKER: Hello. Sebastian Wiesinger, noris network. My question is regarding this control word: is it that LINX hasn't got any problems because they use this control word? And do you know if it is enabled by default on Juniper systems? Because I couldn't find it for VPLS. I know it's enabled by default for layer 2 VPN services, but for VPLS I couldn't find any reference to even having the possibility to set the control word, so I was a bit confused about that. Do you know any specifics about that?

JOB SNIJDERS: I am unsure about the specifics either. But what I do know: if you look at just point-to-point circuits, which are commonly used, the PEs will negotiate with each other whether they have control word capability, and if yes, by default, every platform will use it. So in a lot of networks this issue never arose, because by default the routers were doing the correct thing by setting a control word and hiding the first nibble of the destination MAC address in packets. But the exact specifics I don't know about.

CHAIR: Any other questions? Excellent. Thank you very much.

(Applause)

And next up we have Manish Karir, who is from DHS. He will be speaking to us about IPv6 DarkNet analysis.

MANISH KARIR: Good afternoon everyone, I am the Programme Manager with DHS. The work was conducted at the University of Michigan, and the people listed were all working on this together; they are all part of the team. I am just presenting for the group here.

So I am going to be talking about Internet pollution. You have probably seen some of these slides before, because we have talked about Internet pollution quite a bit in the NANOG setting and elsewhere as well. So, in general, when we talk about Internet pollution, we talk about background radiation: junk that you see on the Internet in areas where there is supposed to be nothing, and traditionally we have thought of this as coming from a few interesting sources. Scanning: worm scanning is supposed to be a common way in which this pollution gets generated. Also, you can see a lot of this traffic from denial of service attack backscatter: spoofed IP addresses attacking a victim, and the responses go off into some strange land. And if you build a DarkNet which is looking at the strange land where there isn't supposed to be anything, you can pick up a lot of interesting things.

Traditionally, when we looked at Internet pollution, we thought about worm scanning and backscatter. But based on other work we have done in IPv4, we have a slightly different view of what Internet pollution is. It can be a whole lot of junk. Misconfigurations, very common; topology mapping scans, where people forgot when to stop scanning; software coding bugs: in one case we actually found that somebody essentially had a software bug, a byte ordering problem, which resulted in packets going off to the wrong place. Bad default settings: we have heard about the 1/8 and 1.1.1.1 problem. Routing instability, and even Internet censorship techniques, can result in this.

Previously, we have talked about a lot of IPv4-based DarkNet studies. We have looked at maybe 20 different /8 network blocks; in each case we had the same methodology: we announce a large prefix, we collect traffic, we analyse and we publish results. In addition to these different /8 studies, which were all over a short period of time, there are also longstanding network telescope studies you might have heard about: Merit has a long-standing DarkNet study, as does CAIDA at UCSD.

What I want to talk about today is IPv6 pollution, which has not been studied much in the past. The only previous work that we have seen on this was joint work between APNIC and Sandia Labs, where they announced the covering prefix and collected whatever traffic was unclaimed in the BGP routing table at their collectors.

And they found some small amounts of traffic and they noticed there isn't a whole lot, and that kind of makes sense because IPv6 is not used a whole lot either. So, we would expect the pollution to be minimal as well.

So, we looked at that study and we said: how could we scale this up? Clearly we want to maximise the size of our collection: what is the largest amount of address space we could advertise? We want to look at regional effects: we know from our IPv4 experience that Internet pollution varies from one point to another. And we want to be able to study this early stage: as we are transitioning and more and more networks are becoming v6-enabled, we want to know what's going on in terms of detecting possible misconfigurations, places where people might be making mistakes, instabilities. So we want regional coverage, we want large address space coverage, and we want to know the differences between allocated and unused address space: if we monitor different kinds of addresses, do we see different kinds of pollution?

This was our methodology. We picked the five /12s that were allocated to all the RIRs; these are covering prefixes. We would go ahead and get permission from the RIRs to announce these for our study, determine the visibility of our announcement, and also probe to see whether there were any data plane effects, things like port blocking or filters that might be in place; then analyse the data and report the results to the community. And of course the important thing: make sure we don't break the Internet while we're doing the study.

First step, we went ahead and got LOAs from all the RIRs, which would allow us to announce the /12s. We had to of course present these to our upstream providers, because they of course are sensible and check to make sure we don't start announcing /12s on a whim.

We started this experiment in November, and we have recently got an extension from some of the RIRs to continue this as a longer-term study. The data that I'll be presenting today comes in different subsections: in some cases it will be week-long subsets; in some cases, for trending, I'll be showing three months of data at a time. But it's all essentially since last November.

So what we're doing is taking the /12 announcements, presenting the LOAs to our upstreams, which are Hurricane Electric and AT&T, routing the pollution traffic to our collectors and doing the analysis. The one clarification here is that, with RIPE, we had to reduce our /12 announcement to just a non-covering /13 and /14 segment, and we'll talk about whether that mattered or not.

First step, we needed to validate how visible our announcement was: did it propagate to the whole world, or was it limited? So we look at the Route Views servers and make sure that our prefixes are visible there, and also at the RIPE monitors; about eight of the nine Route Views sites saw our announcements, and nine of the 12 v6 monitors in RIPE saw the announcement as well.

We did see some diminished visibility for the RIPE prefixes in mid-January and we are not yet sure why, so maybe if some of you have some ideas, we can get some insight into what was going on there.

In order to validate data path connectivity, to make sure we didn't break the Internet as part of our study, we took a sample of 12,000 v6-capable hosts, derived from the top sites lists, and we probed them before starting our announcements and again after starting the announcements, just to make sure that all of them were still reachable. We noticed no significant impact of our announcement on the reachability of those sites.
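
A sketch of what such a before-and-after reachability check can look like, assuming a Unix ping6 binary and a placeholder host list; the actual study's host list and tooling were not described in this level of detail.

    import subprocess

    hosts = ["2001:db8::1", "2001:db8::2"]  # placeholder list of v6 hosts

    def reachable(host):
        # Three pings, two-second timeout; exit code 0 means replies came back.
        return subprocess.call(
            ["ping6", "-c", "3", "-W", "2", host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

    def survey(host_list):
        return {h: reachable(h) for h in host_list}

    before = survey(hosts)   # run before announcing the covering prefix
    # ... start the /12 announcement ...
    after = survey(hosts)    # run again afterwards
    broken = [h for h in hosts if before[h] and not after[h]]
    print("hosts that became unreachable:", broken)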

In order to determine whether there was any port filtering that would affect the kind of traffic we were observing in our DarkNet, we obtained access to five distributed hosts in different regions of the world and ran Nmap scans inbound to our collectors, to make sure that all traffic that could be routed to our collector was indeed reaching it, and once again we found no port filtering or blocking.

So, getting back to the key point: did changing our RIPE announcement from a covering /12 to a non-covering /13 plus a /14 make any difference?

And it did. It made a huge difference in the volume of traffic we were collecting. On the left-hand side you are seeing traffic over a two-day period; this was when the /12 announcement was made, and we see about 300 to 400 kilobits per second of traffic. On the right-hand side you see a chart over a longer duration, and you see almost negligible traffic. In fact, the traffic for the RIPE region that we did collect was so negligible that we do not present any data on it in the following graphs; it was just, you know, hundreds or a few thousands of packets a day.

So, for the other four /12 announcements that we were able to conduct, I am going to present some comparative charts. What kind of data do we see? What kind of traffic do we find?

In each of the following slides you will see four charts, one for each region: AfriNIC, LACNIC, APNIC and ARIN. What we're looking at is traffic volumes here. Once again, you see APNIC with the largest traffic volume, roughly 500 kilobits per second of traffic, up and down a little bit. For ARIN, we see about 300 kilobits per second, but you see these interesting, very regular, once-a-day peaks of 1 megabit per second of DarkNet traffic; I'll talk towards the end of the presentation about classifying what exactly that was. And for AfriNIC and LACNIC we see smaller amounts: LACNIC, 250 kilobits, and AfriNIC, a very small amount.

What protocol breakdown do we see when we look at these regions and analyse the data? In IPv4 we would typically see mostly TCP, a small amount of UDP and a very small amount of ICMP. Here in IPv6, we find something totally different: we find lots of ICMP probes, lots of ICMPv6 ping requests, a fair amount of misrouted UDP traffic and an almost negligible amount of TCP traffic.

Long-term trends: this, once again, is over a three-month study, from left to right. You can see a slight increase. This is just for one region, just for the ARIN /12 announcement. Towards the end you see a slight increase in the volume of data we are seeing. I think, once again, the sample size is small; we need to observe over six months, twelve months before we see any clearly visible trends.

In terms of destinations: what targets, what addresses are these packets going to in each of the regions? Across all regions we see this consistency, which is that 90% of the traffic is going to only about 100 unique destinations or fewer. And where is it coming from? Once again, 90% of the traffic in each region comes from 1,000 unique sources or fewer.

The time-to-live values I'll come back to in a following slide. What are some of the top ports? The port analysis is really trying to get at: what is the traffic that we're seeing composed of?

On the left-hand side you are going to see charts for top TCP ports and on the right-hand side charts for top UDP ports. The UDP ports are always very clear: lots of DNS traffic. On the TCP side of things, things vary a little bit, but you see some backscatter and some misconfigured traffic, varying from DNS traffic to HTTP to SMTP, in some cases even NTP. And it's fairly consistent across all regions; these are the sets of services people run, and when you are running a DarkNet sensor, you will undoubtedly catch some of this traffic from these different applications. So let's talk about some interesting case studies.

One thing we really wanted to look at was: are we seeing any worm activity or scanning in IPv6? And the answer is no. We looked for large-scale scanning using any protocol, and we didn't see any such activity. We do see signs of limited subnet scanning; for example, there is lots of ICMP-based probing and scanning going on in each of the regions. In one interesting case there was a sequential scan that we were able to recognise, but it was over a small subnet; it wasn't a /0 or /12 indiscriminate scan, but sequential within that small subnet. We also see the probing that we would tend to expect; for example, there is a lot of IPv6 activity in topology discovery, and in one case we saw a single IP address, sourced somewhere out of Akamai space, sending 2.5 million packets to about 140 or so unique destinations. So we see a lot of pointed scanning but not indiscriminate scanning.

Did we see link-local addresses? This was an interesting study, because it would point out the potential of these IPv6-enabled networks to be susceptible to spoofed addresses coming out, or to leaking address space. We actually saw about 800 unique link-local addresses, which we should not be seeing at our collectors at all; this is traffic sourced with these addresses.

In one case, for example, there were about 71 million ICMP packets, all with the same link-local source address.

NTP, BGP: we see a fair amount of NTP and BGP traffic as well in our three-month data set. The NTP traffic, for example, came from about 4,700 unique sources, and it was all batched together: we would see a roughly even distribution from AT&T, Verizon Wireless and [H Cast], and they all seemed to be trying to get to the same single server, which was in the IPv6 pool for ntp.org. We see a small amount of BGP traffic, from only about 330 unique sources, but these were essentially routers which kept trying to reconnect or initiate a session with a target that was not there.

And we were pretty sure this was legitimate BGP traffic, because, on inspection, we actually saw the IP addresses resolved to loopback names.

E-mail traffic: we saw a fair amount of SMTP traffic as well. All of it we could identify as e-mail servers that were attempting to deliver mail over IPv6. And these were, once again, hundreds or thousands of different e-mail servers; some of them were Google e-mail servers, some were Comcast, depending on which region you were looking at.

Now, one of the biggest contributors to the whole data set was DNS traffic. So we really wanted to figure out: if we slice and dice it, why are we seeing so much DNS traffic and what does it mean? So we looked at the DNS traffic in detail; we see both requests and responses, and they are all valid requests and responses. In terms of the sources these requests were coming from, Hurricane Electric was at the top of the list with about 55,000 unique sources, AT&T not far behind, then EdgeCast with 23,000, and in all of these systems, large numbers of IPs within these ASes were sending DNS request packets.

Breaking it down by overall regional trends, we see lots of DNS queries coming from the APNIC region, 176 million (once again, this is over a three-month window), 75 million from ARIN, and of course, as I said before, very small quantities from the RIPE region.

Responses: we see similar trends, lots of responses from the APNIC region, about 450 million; a very large number from ARIN as well, and so on, and you can see the source distribution as well.

Some interesting things: we actually did see some interesting activity from DNS blacklists run by a single entity. And coming back to the periodic spikes that we mentioned in the traffic: we tracked these down a bit further, and what we found is that these were all responses from either ns.ripe.net or a handful of comcast.net resolvers, and they were all going to a single destination address in the [dock] space. If anybody knows what this was for, or where it was going to or coming from, please find me in the break.

One of the things that we have thought long and hard about in our study is: why are we seeing so much traffic, and what are the potential implications of our covering prefix announcements? One thing that contributes a lot to what we're seeing is that many of the addresses we see belong to prefixes that are very close to prefixes that are in the routing table. For example, about 40 to 80% of all packets were within one hex character of a routed prefix; that is, if that one character were changed, those packets would not be seen by us. That potentially points to misconfiguration; it's hard for us to be sure, but this is the kind of traffic we would pick up, because we're routing a covering prefix.
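
A sketch of that "one hex character away" test, assuming Python's standard ipaddress module; the prefixes are documentation examples. Expand each prefix to its nibble string and count the differing positions.

    import ipaddress

    def nibbles(prefix):
        # Whole hex characters (nibbles) covered by the prefix length.
        net = ipaddress.ip_network(prefix, strict=False)
        full = net.network_address.exploded.replace(":", "")
        return full[: net.prefixlen // 4]

    def near_routed_prefix(addr, routed):
        for pfx in routed:
            plen = ipaddress.ip_network(pfx).prefixlen
            want = nibbles(pfx)
            got = nibbles("%s/%d" % (addr, plen))
            if sum(1 for a, b in zip(want, got) if a != b) <= 1:
                return pfx
        return None

    # 2001:db9::1 is one nibble away from the routed 2001:db8::/32:
    print(near_routed_prefix("2001:db9::1", ["2001:db8::/32"]))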

Then again, route instability: route instability can also result in a fair amount of traffic being visible to us. If there are packets in flight and the underlying route disappears, where do those packets go? Well, they would be picked up by a covering route. Partial visibility can also have a significant impact: if some places know how to get to a route and there is asymmetric routing and the return path doesn't know how to get back, you can end up with traffic being collected by a DarkNet sensor.

So, to conclude: this was a significant large-scale study of IPv6 pollution traffic. The motivation behind this study really is to try to understand what potential issues are coming up as people enable IPv6 services in their networks, and the goal here really is: if there is some instability or misconfiguration we can identify, we can bring that to the attention of the people that are running these networks and hopefully help address the problem so that they can transition more easily.

We identified key contributors of traffic so far. And once again, pollution traffic is highly unpredictable. You never know what you're going to get but it's always very interesting.

We would like to continue our long-term study to figure out how the trends are evolving as more and more systems become IPv6-enabled. We'd like to help the operational community by sharing what we're seeing in the traffic back through a portal, so you can see what data was collected by our collectors, or maybe something customised, where we can show that these were addresses or traffic that leaked out of your space. And finally, we would like to reintroduce the RIPE prefixes into our study and better understand why we see so little traffic. We suspect it's largely because visibility in the RIPE region was restricted, and also, potentially, the reduced size, because we do know that there is greater pollution traffic the closer you are to actually assigned addresses, and because we were so far away from assigned address space in the RIPE region, we probably saw very little traffic.

That's all I have. I'll take questions.

CHAIR: Thanks Manish. Are there any questions?

AUDIENCE SPEAKER: Thank you for an interesting presentation. You mentioned this clustering in observations, in the NTP, the BGP and the DNS. Do we have an explanation for this clustering? Is it just the way you measure, your measurement points? Or is there some other reason?

MANISH KARIR: We suspect that, especially for the NTP servers that we saw traffic from, these were all coming from similar systems, and we suspect these were mobile systems that might have been configured to talk back to an NTP server, and then the route to the NTP server disappears; you know, they are all configured to talk to this one server. They might have a default configuration and then be put in a location where there is no IPv6 connectivity, or an incomplete routing table in that area, so this traffic gets misrouted to the collector. So, yeah, there is a lot of misconfiguration and similar activity that goes on, and we see this across different providers too, so maybe it's a vendor default configuration for NTP; I'm not quite sure yet.

AUDIENCE SPEAKER: Jen Linkova. A question: you mentioned link-local as a type of incorrect source. Any other suspicious sources, like documentation prefixes, ULA, something like that? Did you look into it?

MANISH KARIR: No, no significant other sources like that.

AUDIENCE SPEAKER: It's actually interesting, because I checked what we are receiving as incorrect sources and there are plenty of documentation-prefix examples, which means people are reading the documentation and applying the examples.

MANISH KARIR: It would be good to compare notes. Maybe we missed the time window, because this analysis was over a time window; the observations come and go as well, they are not always consistent. Sometimes people realise the mistake and they fix it just in time.

CHAIR: That seems extraordinarily unlikely.

AUDIENCE SPEAKER: You should think positive.

CHAIR: I am known for my relentless negativity.

AUDIENCE SPEAKER: Hello, I was wondering, do you have a corresponding publication which I can read to get more insight into this work?

MANISH KARIR: It is currently under preparation, under submission, but we might be able to share something offline.

CHAIR: Excellent. Are there any other questions? I am certain that you would like to ask; we have about 14 more minutes' worth of questions to ensure that you do not go to the break early. You see, thank you: Aaron Hughes is here to stand between you and your early coffee break.

AUDIENCE SPEAKER: Aaron Hughes. Do you want us to receive these /12 prefixes? It's not entirely clear if you are looking for us to actually permit the dark traffic to you, or if we should continue to use sanity filters (maybe the RIPE region actually does that, wouldn't that be great) blocking things like "le 48", "gt 12", whatever.

MANISH KARIR: Actually, we just had to get it past our first-hop announcement. I mean, AT&T had to write a special script, because their code for adding new announcements did not even consider the possibility of anyone ever doing a /12. We also had to talk to Hurricane Electric to make sure the announcement was accepted. But that's as far as we got. We did not reach out to the broader community to make sure these were accepted, and if you know that there is sanity checking in place in your network, then please let's talk to see how we can...

CHAIR: You might want to consider sending out an e-mail to the operators' list saying: if you actually use sanity filters, consider permitting the /12s.

AUDIENCE SPEAKER: Hi, Suzanne from the RIPE NCC. I just have a note from Daniel Karrenberg of the RIPE NCC; he is suggesting that the really good questions around announcing all the IPv6 address space of the RIPE region be discussed at the Routing Working Group.

CHAIR: It is so noted. Excellent. Anything else? Manish, thank you very much.

(Applause)

And we are now, early yet again, on coffee break. Please have sensible and useful and interesting conversations with each other, and if you can't do that, have some coffee. We are back here at 16:00.

(Coffee break)