
These are unedited transcripts and may contain errors.


Plenary session

14th of May, 2013, at 4 p.m.:

CHAIR: Colleagues, ladies and gentlemen, we had better start because we have a very dense session. We have two presentations and a panel, and they kind of form a logical story, so we will start with the first presentation by Paul Ebersman from Infoblox, who will talk about DNS, whether it's attack or defence -- I guess it's both. Because the session is very compressed, we will allow one question for Paul. Then Merike from Internet Identity will talk about one specific abuse, which is DNS amplification attacks, which will bring us to the theme of our panel, which is anti-spoofing; we want to look at seven years of this, what has happened and what needs to be done. So that is the outline of the session we are going to have, and having said that, I would like to invite Paul with his presentation. Please, Paul.

PAUL EBERSMAN: Thank you. So, DNS these days: we have got to the stage where people other than us actually know what DNS is, which is in many ways a feature. Obviously we all know that there are many things that don't work very well if we have to remember IP addresses, but one of the key things that has happened is that we are now considered important infrastructure; we are actually considered a financial risk if our DNS is not solid. So instead of basically being a cost centre and showing up as a line in an operations report, if we lose DNS you can wind up in the Wall Street Journal and the New York Times, and we can now get funding to start doing things we have known we should be doing for years. There are some obvious things that most of us are doing: obviously the easiest way to destroy someone's DNS is to get into their registrar account and replace their name servers with your own, if not actually taking over the name servers themselves, or possibly, if you have bad policies or procedures, letting people who shouldn't update the zone files. But in most cases the attackers have got much more sophisticated in what they are doing over the last few years. We started seeing cache poisoning happening back in 2007, and even earlier, because we had a lot of things in the DNS protocol that really weren't well designed, or where security wasn't what we were thinking about when DNS was originally done back in the '80s and '90s. So there are flaws in some of the name server implementations themselves, but there were also some flaws in the DNS protocol, most particularly a short message ID, which is essentially the only way you have of knowing if the reply is for the query you sent out; there really is no other form of identification at all.

One of the other things we have discovered is that there are an awful lot of people out there who are running open resolvers with no restrictions. I have looked at the web page, and we are talking in terms of hundreds of thousands of resolvers out there, many of them embedded in cheap little devices like CPEs that we have difficulty updating quickly, and if your cache is poisoned, there is a man-in-the-middle attack on all of the users that use that cache.

So one of the things that hit us rather early was the fact that the random number generator for BIND, one of the larger implementations, was severely flawed, and so it became almost trivially easy to start guessing. I will skip past the math; if you really love it, you can look at the slides and get an idea, but what it boiled down to is that the odds of two people having the same birthday go up much more quickly than we would think, and because of this math we discovered that there really didn't need to be that many DNS replies before you hit a match for the DNS message ID, even if they were truly random across the full 64K. As you can see, 600 packets at 40 to 80 bytes for a query and a reply for one single record -- it doesn't take very long with today's pipes and machines to start cranking that out. So one of the earliest attacks was basically one where they took advantage of the fact that there was very little or no security with glue, so you could get a reply that included glue records that had nothing to do with what you asked for, and most caches would happily dump it right into your cache with TTLs in terms of weeks instead of minutes. And then you are screwed until you flush your cache or you actually restart the server.
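
A rough sketch of the birthday math being skipped over, under the simplifying assumption that the attacker keeps n queries for the same name outstanding and fires m spoofed replies, each with an independent, uniformly random 16-bit message ID (the 300 and 600 figures below just illustrate the packet counts mentioned above):

    P(\text{at least one ID match}) \;=\; 1 - \left(1 - \frac{n}{2^{16}}\right)^{m} \;\approx\; 1 - e^{-nm/2^{16}}

    n = m = 300:\quad 1 - e^{-90000/65536} \approx 0.75 \qquad\qquad n = m = 600:\quad 1 - e^{-360000/65536} \approx 0.996

So a few hundred spoofed replies already give good odds, even against a properly random ID.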

We got a little more sophistication when people started realising that that wasn't necessarily the easiest way to do it, and so Kaminsky found that all you had to do was use a random label as the first label under a legitimate domain. In the previous example, one part of the attack was that you had to run a name server that lied, so you had to have altered code, which meant that the bad guy actually had to learn about source code and compiling. In this case all you do is put a random label in front of the legitimate name; you know that the recursive server won't have it in cache, you get your flood of replies covering a range of five to six hundred message IDs before the real answer comes back, and you are in their cache. That cache has been poisoned, and once you are there you can insert whatever you wish into it. It became obvious that we needed to do some more things, because we couldn't redesign DNS -- getting things through the IETF would take years -- but there were some things we started doing. Instead of using port 53 we started using a full range of source ports, because then a source port and a message ID both have to match, so we have somewhat increased the difficulty of actually getting a collision with what looks like a legitimate reply. It certainly increased the time it would take, but the reality was that even, you know, four or five years ago, when a 10 meg pipe was a pretty fat pipe, in five to ten hours you could pretty much poison anyone's cache.
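
To put numbers on that, a quick sketch in Python of the expected spoofing effort with the message ID alone versus the ID plus a randomised source port. The 64,000 usable ports is an assumption; real resolvers draw from a narrower ephemeral range:

    import math

    ID_SPACE = 2 ** 16      # 16-bit DNS message ID
    PORT_SPACE = 64000      # assumed usable source ports once randomised

    def packets_for_hit(space, p_target=0.5):
        """Spoofed replies needed for a p_target chance of matching one
        outstanding query, assuming independent uniform random guesses."""
        return math.log(1 - p_target) / math.log(1 - 1 / space)

    print(f"ID only:   ~{packets_for_hit(ID_SPACE):,.0f} packets for a 50% hit")
    print(f"ID + port: ~{packets_for_hit(ID_SPACE * PORT_SPACE):,.0f} packets for a 50% hit")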

So, obviously, we can't afford to have our caches poisoned; what do we do? Well, some of the stuff we have already done in most of the major name server implementations: we have put in more randomness and fixed the poor-quality random number generator in BIND for message IDs; we are doing checks on glue to see if it seems legitimate for that packet; and several of the implementations also check that if they get an authoritative answer and there is already something in the cache that was only glue, the authoritative answer wins, as you would hope. The biggest thing we tried to do, obviously, was DNSSEC, where the recursive server, if it's a validating server, should know from the authoritative server whether that was actually the data you wanted, or whether it had been altered in transit or sent by some random third party. And as we know, DNSSEC has been deployed everywhere.

Unfortunately, what has happened instead is that DNSSEC is not securing data, because realistically speaking, DNSSEC is not truly useful until the end client on that machine does its own validation and actually does something useful with the different validation states, possibly refusing to connect. So instead, what we have done is increase the fragility of our DNS and the overhead on our machines for very little positive security impact, at least currently. So that is the fun with caching servers.

Now on to the other side, authoritative zone servers. Obviously, with recursive servers what you have is, in theory, a known set of clients asking questions, but you don't know what questions they will ask; they can ask for any domain name in the world. Authoritatives have a very different problem: you have a known set of answers that you are going to be asked about, but you have no idea who on the Internet might ever ask, and you have to answer anybody, because that is your job. So what has happened is that the attackers have decided that attacking authoritative servers is certainly one way of getting you off the Internet. They can't subvert your data, but they can at least give you denial of service. And unfortunately the trend in botnets and malware these days has made it trivially easy to send tens of gigabytes of data down someone's pipe, and there are very few places you can go where you can actually survive that kind of onslaught at any one single site.

And again, the bad guys are starting to get vaguely clever -- it's not all script kiddies. They discovered that I can ask a 60 or 80 byte query and potentially get back 4K of data from an authoritative server, and if you are using NSEC3 I get bonus rounds, because not only do I get that huge reply, but if I decide to ask for a non-existent label in your DNS, you have to give me proof of non-existence, which means your authoritative server has to do the hashing to find the two bounding records that prove the record I asked for falls in between. So I can force you to clog your own pipe and I can force your server to burn CPU cycles. Once again, of course, the argument is "just allow zone transfers, stop being so paranoid", but not everybody looks at it that way.
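
The amplification arithmetic behind that, as a rough sketch; the packet sizes here are illustrative round numbers, not measurements:

    # Rough DNS amplification factors for a spoofed query; sizes are assumptions.
    query_bytes = 64          # small query, e.g. ANY with EDNS0
    plain_reply = 512         # classic UDP-limited answer
    dnssec_reply = 4096       # large EDNS0/DNSSEC or NSEC3 denial-of-existence reply

    for name, reply in (("plain", plain_reply), ("DNSSEC/NSEC3", dnssec_reply)):
        factor = reply / query_bytes
        print(f"{name}: {factor:.0f}x amplification; "
              f"a 10 Mbit/s attacker lands ~{10 * factor:.0f} Mbit/s on the victim")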

So, on the authoritative side, what do you do to survive all of this? Obviously, you want to harden what is available. You can try perimeter defences -- though as I said, on authoritative servers in most cases you can't truly know who will ask. About the best you can do is anti-spoofing, where you say I am not going to accept queries arriving on the port facing the Internet with source addresses that are inside my own firewall; but odds are that if you are playing those kinds of games anyway, split-horizon views and so on, it really doesn't help much. If you are going to try to build a better mousetrap, obviously one of the things you can do is grossly overbuild your hardware. If you are getting, you know, a thousand queries a second and you build an infrastructure that can do 100,000, that is probably going to survive many botnet attacks, at least as far as the packet rate goes. It may not help with your upstream link, but it will at least help. You can try clusters or load-balanced servers, but realistically speaking it's much easier to clog the pipe or blow that whole site out, so while clusters and load balancers might give you a bit of survivability, they are more of a performance enhancement, not really a hardening or protection measure that is going to be useful to most folks.

What you can do is present a less easily attacked target. One of the easiest ways, though certainly not the cheapest, is to have Internet pipes at one or more of your sites that can survive that kind of bandwidth use; then it's simply a matter of whether you can survive the query rate as well. You can add more authoritative servers, more NSes, up to a point -- everybody should have at least two for robustness -- but you do start running into the problem that not all clients deal gracefully with having five or six NSes in the reply, and you also have to worry about the UDP packet exceeding 512 bytes. As we all know, EDNS0, which in theory is necessary for DNSSEC, is supposed to make all of that go away; the reality is that the number of broken middleware boxes, security boxes and broken security stacks that filter anything over 512 means you pretty much have to fit into one 512-byte packet or less, and you can see that even in the root server replies. Anycast is kind of interesting, because the bad guys deliberately spread themselves out so there is no particular origin or AS, no particular IP address range that is obvious as the source of the attack. That is really good for them, in that it's very hard to use ACLs or traditional router-based defences against them; but anycast, rather than being a performance thing -- being topologically close so that you get a faster answer -- has become a resiliency or robustness defence on your part, because if the attackers are spread out they will each be attacking the topologically closest machine, and if you have anycast across multiple sites you have narrowed how much they can actually fire at any one server, and you get a lot of robustness that way.

HA is another one: if for some reason there is some kind of bug, like they are doing the NSEC3 thing again and your machine can't keep up, it may help, but realistically speaking, unless it's something that actually crashes your name server code, HA is good for robustness and resiliency in other ways but doesn't do much in terms of defending against these attacks.

Now we are getting into the interesting or amusing ways that they are using DNS.

All of us have been begging our own database programmers and application programmers for years not to hard-code IP addresses and to put fully qualified DNS names into their code instead, because that scales much better and doesn't require pushing code when you need to change things around. Unfortunately we haven't had a whole lot of luck with our guys. On the other hand, the bad guys have picked up on all of this: they are using DNS in their malware, because it may take days or weeks for that code to get out and infect enough machines, and once a particular IP address starts connecting with clients it takes at most a day or two before the security folks start noticing, so they get shut down quickly. With DNS, if you keep shifting your IP addresses around, like with fast flux, you get much more resiliency in getting your clients to connect to your command and control. We would all love to be able to keep all of our users up to date -- we still have people using XP that we would love to replace -- but we can't mandate what our users run. The malware guys can: they can force updates on their clients, so they are running the most recent code they have. The next talk will be about DNS amplification, but pretty much all of the modern malware and botnet software out there that is less than two years old is using some kind of command and control structure with a call-home mechanism, and most of them are also using DNS to make that more resilient.

So one of the things we do have to worry about is dealing with all of that malware and the users on our network. Traditional things like antivirus certainly still help, because unfortunately there is still malware out there that we have all known about for years, but people fire up some machine that has been on a shelf for two years, and it was infected then and no one knew. The malware is mutating so fast you really can't keep up. There are trends, certainly, with vendors pushing next-generation firewalls and intrusion detection and signature-based systems, and they will catch certain things as well, but they really don't help with the iPad brought in from home, already infected because your kid downloaded some cute new screen saver. So one of the interesting trends has been DNS as a protection.

Response policy zones allow you, much like reputation feeds for spam, to essentially have a reputation feed within DNS -- again using the fact that it's a robust, resilient database that propagates changes well and scales well for inserting records -- so instead of getting updates once a day or every hour, it's literally minutes to seconds before you have the latest information. And you can do things based on IP addresses: everybody who uses these two IP addresses as name servers seems to be on one of the bad guys' networks, so I don't have to know what new domain names he has registered any more; if I notice that pattern, I can say I am going to assume anybody using this name server or DNS label is bad. It is very fast and efficient -- with IXFR you can do incremental insertions into your database without a complete refresh -- and it protects the clients: as soon as they come onto your network and you intercept that first recursive query, you know they are infected, and if you are logging that, you have a chance to do something like turn off a switch port or various other things and get them right off your net.
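
A toy sketch of the kind of policy decision being described: matching a query against a feed of bad domains and bad name server addresses before answering. This is only the idea; real RPZ is a DNS zone transferred into the resolver (AXFR/IXFR) and evaluated by the name server itself, and the names and addresses below are invented placeholders:

    # RPZ-style policy check in miniature; feeds and names are placeholders.
    BAD_DOMAINS = {"evil-updates.example", "botnet-cc.example"}     # from the feed
    BAD_NS_IPS = {"203.0.113.66", "203.0.113.67"}                   # "everyone using these NSes"

    def policy_action(qname: str, ns_ips) -> str:
        labels = qname.rstrip(".").split(".")
        for i in range(len(labels) - 1):                 # match the name or any parent domain
            if ".".join(labels[i:]) in BAD_DOMAINS:
                return "NXDOMAIN"                        # pretend the name does not exist
        if any(ip in BAD_NS_IPS for ip in ns_ips):
            return "NXDOMAIN"
        return "PASS"                                    # resolve normally

    print(policy_action("download.evil-updates.example", ["198.51.100.1"]))  # NXDOMAIN
    print(policy_action("www.example.org", ["203.0.113.66"]))                # NXDOMAIN, bad NS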

So, that is the basics. I was told I had time for one question. Is there anybody?

CHAIR: Yes, thank you, thank you, Paul. We have time for one question for Paul.

AUDIENCE SPEAKER: Jim Reid, just some random guy off the street. I was a bit concerned at one or two of the comments you made earlier on about defences, and how things like anycast and bigger pipes are good things. Well, obviously they are for DNS, if you are providing reliable, stable services, but in the context of these reflection attacks they are actually very bad things, because you are giving power tools to the attackers and making these reflection attacks a very, very useful attack vector. So you have to take other factors into consideration. You might want to use things like the rate-limiting patches that are going to be talked about soon, or filters and things like that with your upstream providers, so you can at least stop some of these bad packets from entering your name servers, and stop your name servers amplifying and spewing that crap over the Internet.

PAUL EBERSMAN: Thank you. I specifically didn't have those in this talk because that goes to the next talk; when I usually give this presentation I do talk about response rate limiting and BCP 38-style filtering, for exactly those reasons.

JIM REID: Thanks.
(Applause)

CHAIR: Now I'd like to invite Merike, and she is posing the question of whether you are part of the problem. If you have a question, please save it until the end of the panel, because it leads into a bigger discussion. Thank you.

MERIKE KAEO: Hi there. I am hoping that those of you in the room who are part of the problem hopefully won't be in another month.

So, my slides will talk first about some statistics on DNS amplification attacks in the last year, some specific measurements that have been taken, how to go about closing open recursive resolvers if you find you have them, and what other basic network hygiene can help.

So I think most of us know what DNS amplification attacks are. Basically, you know, somebody compromises, let's say, a large hosting provider, maybe doing some kind of a brute-force attack on WordPress environments, and so has access to a whole bunch of hosts and a lot of bandwidth; they spoof IP packets, an open recursive resolver goes and gets the information asked for, and some poor unsuspecting target gets a whole bunch of packets, and it's a disaster.

So what are the growing trends? Well, reflective DDoS attacks use the IP addresses of legitimate users, and combining spoofed addresses with legitimate protocol use makes mitigation extremely difficult: what do you block, and where? Recent trends have been utilising DNS as an attack vector since it is a fundamental Internet technology. So it exploits the very large number of unmanaged open recursive resolvers, and also large response profiles, as Paul was just mentioning, with large replies to some queries.

The latest trends utilise the resources of large hosting providers for outbound bandwidth, and, you know, DNS is not the only protocol: this has all happened with SNMP, and amplification attacks can use other protocols as well.

How bad is the problem? This particular slide takes information from a report that was just released by Prolexic, and the key point is in red: over the last year there has been about a 200% increase in the number of DNS-type attacks. And why does DNS amplification work so well? Victims cannot see the actual originator of the attack; there are lots of DNS packets from a wide variety of real servers, and you cannot really block them effectively. The DNS servers are answering seemingly normal requests. The originating ISPs aren't necessarily impacted, nor do they really see anything that is out of the norm. And filtering the attack traffic is really difficult in practice.

And why would people actually run open resolvers? There is a distinction that I am trying to make here; I don't want to always say "you are running an open resolver, close it". I want to make the distinction between managed and unmanaged, because there are some deliberate services, such as Google, OpenDNS and DynDNS, where they do ensure reliability and stability, at least that is the hope. Many are not deliberate, and those are the ones we really need to be concerned about. So if you have an unmanaged open recursive resolver and you are not aware you are running it, that is when you need to take a look at whether you should be closing it. And what needs to be done?
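
As a minimal sketch of the kind of check involved, here is a probe using the dnspython library, run from an address outside the network the resolver is supposed to serve. The address and query name are placeholders, and the public testing services mentioned later in this talk are the easier route:

    # Minimal open-resolver probe using dnspython (pip install dnspython).
    # Run it from OUTSIDE the network the resolver should serve: if a recursive
    # answer comes back, the server at target_ip is effectively an open resolver.
    import dns.flags
    import dns.message
    import dns.query

    def looks_open(target_ip: str, qname: str = "example.com") -> bool:
        query = dns.message.make_query(qname, "A")   # RD (recursion desired) set by default
        try:
            reply = dns.query.udp(query, target_ip, timeout=3)
        except Exception:
            return False                             # no UDP answer at all
        offers_recursion = bool(reply.flags & dns.flags.RA)
        return offers_recursion and len(reply.answer) > 0

    print(looks_open("192.0.2.53"))                  # placeholder address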

First of all, try to ensure that no unmanaged open recursive resolvers exist. This is actually a huge problem. Equipment vendors need to ship with the default closed, and best current practice documents should not show recursion as open. I actually did some searches to find documentation on how to close them, and it was amazing how many configurations I found that said, oh yes, here is how you configure a recursive resolver, and left it at that. That is also part of the problem. We also need to get everybody participating in stopping the ability to spoof IP addresses. Everybody always talks about BCP 38 -- I like it as much as anybody -- and it relates to ISPs doing ingress filtering; as we will discuss, I personally believe that egress filtering at the edges should be done as well. Equipment vendors need to have better defaults to help alleviate spoofing. Also, there should be more data on where spoofing is possible and who actually has open recursive resolvers. There have been some recent studies that help determine open resolvers: the DNS-OARC meeting was just this past Sunday and Monday, and Jared, who runs the Open Resolver Project, was there, as was Duane Wessels, who was part of The Measurement Factory. These are pointers to get more information.

The open resolver project is the more recent one.

So this slide just kind of shows, when you get to The Measurement Factory site -- it took me a little while to figure out where all the data was, so that is why, you know, I have that little cheat sheet there at the bottom, because it's not obvious -- you go to the main page, then results, then DNS server results, and finally open resolvers, and you will get to this page.

And what they do is, since 2006 -- which was when there was a large DNS amplification attack that basically targeted the root servers; every couple of years this comes to the forefront of the news again, somebody starts doing something, people forget, and then you get something else again -- with The Measurement Factory you actually have a list of open recursive resolvers according to their measurements for every single day since 2006. So for the fun of it I took May 7th, and, I don't know, it was just something I wanted to do, just to see whether there is a difference between regions, and I did do an eyeball comparison with some of Jared's results, and the regions do match up. If you look at some of the slices -- I didn't want to call out specifically which telecom had this many, you can see that on the website -- for me the point is that every region has its issues, and you really should take a look at that.

This one is what you see when you go to openresolverproject.org, and when you look at the top, at the detailed history and breakdown, that is where you can see the other results from the measurements.

I graphed them, OK, and I am not going to go through all the different graphs, because the main point is the topmost line here, which basically shows you have got over 30 million open recursive resolvers. 30 million. I doubt that all of those are managed. And I mean, if you go and look at the website and you look at the numbers, right, there is no correlation -- one of the reasons I started doing the graph was to see whether there is some kind of a trend, and I am going to say the trend is: it's a huge problem. So really, as far as I am concerned, what we need to do is look at whether we have these open resolvers in our environment, and then the next step is: what do I do to close them? So here are a couple of pointers. The first one is BCP 140, which was written a long time ago and says here is how to prevent the use of recursive name servers in reflector attacks, and there are pointers from BIND and Team Cymru on how to close them. Response rate limiting is also something that you can do; look at it. Some people really like it; I have talked to a few people who are not in favour of it, but I have heard good things, so I would take a look at it.
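
For readers who have not met response rate limiting, a much-simplified conceptual sketch follows. Real implementations, such as the RRL patches for BIND, are keyed on the response and a client /24, can "slip" truncated replies instead of dropping, and are tuned very differently; none of that is shown here, this is only the idea of capping identical answers per client per second:

    # Very simplified response-rate-limiting sketch; real RRL is keyed and
    # tuned quite differently.
    import time
    from collections import defaultdict

    RATE = 5                                   # responses per second per bucket
    WINDOW = 1.0                               # seconds
    buckets = defaultdict(lambda: [0, 0.0])    # (client /24, qname) -> [count, window_start]

    def allow_response(client_ip: str, qname: str) -> bool:
        now = time.monotonic()
        prefix = ".".join(client_ip.split(".")[:3])    # crude IPv4 /24 grouping
        count, start = buckets[(prefix, qname)]
        if now - start >= WINDOW:
            buckets[(prefix, qname)] = [1, now]        # new window
            return True
        if count < RATE:
            buckets[(prefix, qname)][0] += 1
            return True
        return False                                    # over the rate: drop (or truncate)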

What other basic network hygiene helps? Ingress filtering -- and ingress filtering does not equal uRPF, as we will probably say again later; you can also just use simple filters. Make sure you look at whether or not you need transit route filters, peering filters in an IX environment, and next-hop filters, and don't redistribute connected routes into your IGP.

I have this configuration slide up just to show, if you are looking at an ISP and, let's say, a home customer -- and quite frankly this is a configuration that I have, this is mine, except that I don't use the documentation addresses, and yes, I do have v6. So all I am doing is saying: my provider gave me this, yes, a /48, long story, ask me later. All I say is that anything sourced from that prefix, you know, that allocation, is the only traffic that I am going to send out, because whatever my test network does, I don't want to be part of the problem by leaking stupid traffic out. My provider will have the opposite filter; they will have an ingress filter, so BCP 38. You could also do something like that using route filters, so it doesn't have to be uRPF. Please don't be part of the problem, OK? First off, please do test in your environment whether you have unmanaged open resolvers -- there are two pointers here to two different places where you can actually put in your net block and see the result -- and ensure you are helping to stop spoofed traffic as close to the source as possible. Again, this doesn't mean uRPF; you can use packet or route filters. And I have some additional references. And time for the panel.
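
A sketch of the decision that egress filter is making, written out with Python's ipaddress module. The documentation prefixes stand in for whatever allocation your provider actually gave you, so this only illustrates the logic, not a drop-in filter:

    # Egress-filter logic in miniature: only packets sourced from my own
    # allocation may leave; the provider applies the mirror ingress filter
    # (BCP 38). The prefixes below are documentation space used as placeholders.
    import ipaddress

    MY_PREFIXES = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("2001:db8:1234::/48"),
    ]

    def may_leave(src: str) -> bool:
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in MY_PREFIXES)

    print(may_leave("192.0.2.17"))      # True: my own space
    print(may_leave("203.0.113.9"))     # False: spoofed or leaked source, drop it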

CHAIR: Thank you very much. I think this presentation is a nice lead-in to our panel, and with that I would like to invite the panelists to the stage. We called the panel "seven years of anti-spoofing", and why seven years? Because seven years ago there was an anti-spoofing task force, and we want to reflect back and look forward as well.

BENNO OVEREINDER: So this is a short introduction to the panel, actually: seven years of anti-spoofing. Before going into why we have this anti-spoofing panel, I want to go back in history -- why seven years?

Seven years ago, 2006 to 2008, there was this RIPE anti-spoofing task force, and as a result there were two reports: a practical how-to with Juniper and Cisco rules on how to configure routers to comply with BCP 38/84, and a business case -- being a good citizen, good network hygiene -- in the document RIPE 432, which makes a strong case for it. And I can't tell whether it made any difference. Especially where I come from, we have seen that people are patching their name servers and mitigating attack vectors in the network, things like Merike told you about, response rate limiting. So what kind of situation are we in now, and why this panel? Spoofed traffic is still a problem, and we have seen a recent example of that, the Spamhaus attack of 300 gigabits per second, which made all the news; there was really quite a lot of publicity about it.

But in the meantime the landscape has changed over these past seven years: what were the attack vectors in 2006 and what are they in 2013? These are all questions we will try to touch upon in the panel. Also, 300 gigabits per second is a lot, but in 2007 it would have meant something different than it does nowadays. So the severity has changed, but so have the available solutions; we have new, complex routers and network boxes introduced into our networks. Do they become a new attack vector or part of our solution? And do the solutions of 2006 still solve our problems in 2013?

So, to conclude, it all boils down to what kind of concrete actions we can take, as individuals and as a community, to improve on the situation we are currently in. And that is why we are here and having this panel.

So, it's hard to get data on spoofed traffic. We had Manish in the previous session telling us about the darknet; I will come to that later. These are some DDoS statistics. DDoS is a serious problem, and we just took a network infrastructure security report from last year: on about 61 pages of the whole 100-page document DDoS is mentioned. It's not really a scientific statistic, just counting pages. But what I think is interesting -- I am not sure people can see it at the back of the room -- is that about 75% -- sorry, 57% -- do deploy BCP 38/84 according to this study, to secure their networks. And more specifically, to defend their DNS infrastructure with what is called uRPF -- and again, it's not the same thing, it's a specific method to implement BCP 38 -- it's 37%. But this is only solving part of the problem, because if you have to protect your network at the DNS site, then you are already too late; you can only filter part of your network.

These are the darknet measurements, so this is kind of backscatter; Manish already presented some results. This is from a study by IBM X-Force, a darknet of 25,000 addresses. The scale is not that important; it's the trend I want to show you, from 2006 to 2012. There is quite some variation here, but the long-term trend is an increase in the spoofed traffic being observed in the darknets. And some final data points before I go to the panel: this is the Spoofer project, a project by MIT and people at CAIDA. It measures spoofable address blocks, but it's all volunteer, of course, otherwise we can't measure it, so the people running it are probably network engineers who are interested in networking, downloading the test and running it from their network, and then it's reported on the website. So these are really high numbers -- it's about 85% unspoofable -- which might be very optimistic, because these are all network engineers running the tests and they are probably knowledgeable people, so we think it's a little bit biased. There will be a major rejuvenation of this project; CAIDA and the people at MIT asked us to tell you: look at this website at the end of this month or the beginning of next month, they will have redesigned their tools, and they ask you to participate in this project, download and run their software, and come back a month later to see what kind of measurements they have.

I want to go to the panelists; I am still on time, I hope. So, we have six people here on stage. Merike Kaeo -- you just saw her presentation, and I don't have pictures, so figure out which face belongs to which name. Merike has the title of security evangelist at Internet Identity, a company in the information sharing business; prior to joining IID she was responsible for the overall data and security services at ISC, and she founded and served as the chief network security architect at Double Shot Security. David Freedman is in the middle. He has been active for 14 years in the industry and works for the European operator Claranet. He is also a regular participant in and contributor to many industry forums, including RIPE, where he is well known for that, and the IETF.

Eric is the guy over there. Eric is an applied security researcher in the CSO office at Verisign and focuses on inter-domain routing security and information sharing frameworks. Verisign has an interesting perspective on this problem because they run the .com TLD and, as Paul stated, they are kind of in the middle of these kinds of DNS attacks. Hessel Schut is a crime investigator for the High Tech Crime Unit, part of the Dutch national police, and his unit combats serious forms of cybercrime; if you have any interesting questions, get in touch with him, he knows a lot. Hessel is involved in many of the investigations conducted by the High Tech Crime Unit, and prior to joining the unit in 2007 he worked as a network engineer for a Dutch public company, so he is a real techie.

Marek Moskal is an engineer working for Cisco. After obtaining his networking degree at Télécom Bretagne in France, he held a number of positions within his home country, Poland, as well as in the European region, helping in the design of large SP networks. Currently he cooperates with Polish service providers on network architecture, and within Cisco he is a field advisor for the next-generation IP NGN team.

Nick Hilliard -- well, he is also one of our sponsors, of course, but besides that, in his free time, he has a day job. Nick is a pioneer of the Irish Internet and has specialised in IP backbone design and implementation since 1995. He has been involved with INEX since its inception; as its chief technical officer he directs INEX's infrastructure design and engineering and is the key technical contact for the members, and on behalf of INEX he liaises with international organisations like RIPE, the IETF and Euro-IX, and is actively involved in developing European IP addressing policy and creating standards for ISPs worldwide.

I will hand over the mic to Andrei, he will moderate this session.

ANDREI ROBACHEVSKY: So I will moderate this session, and the first question I would like to ask -- it might look rhetorical, but I have to ask it anyway -- is: is the problem of spoofing serious, severe enough to take action on, what do you think? And a question related to that: looking back, seven years ago there was some urgency about solving that problem; has the landscape somehow changed over those seven years?

HESSEL SCHUT: Well, I can only tell you what we have seen in our investigations, so that is a bit of a limited view. But in 2007 we hardly saw any IP spoofing happening in our investigations, and we have done a lot -- a couple of large DDoS attack cases in the past -- and, well, especially this year we only see IP spoofing, actually; there is no other form of attack going on any more. So the landscape seems to be changing, and very much in favour of IP spoofing.

DAVID FREEDMAN: I have to agree. The operational data we have seen in the last year and a half would tend to indicate that most of the attacks involve spoofed traffic, and now pretty much when we have got engineers looking at flow data and it's UDP, you know what it's going to be.



SPEAKER: I must agree with what everyone is saying. I think there is a broadening of the field too: we clearly have a huge problem with spoofing, but I think we also wind up in a situation where different actors with different capabilities are starting to appear, and they will present themselves with different attack patterns. Talking about spoofing is right on, and we shouldn't diverge too much from that when we are addressing this particular problem, but we ought to be aware there is a bigger problem; state attacks and things like that are still around.

ANDREI ROBACHEVSKY: Thank you for those answers, but the Internet is also growing, right? As Benno said, 300 gig seven years ago would have clogged the whole Internet, and now we are fighting this. If you normalise for that, do you think it's still on the rise or is it diminishing? What is your experience?

DAVID FREEDMAN: I think our experience is that it's absolutely on the rise. It is an inflection point that we have hit, and I don't think we are done with it yet, from what I have seen.

ANDREI ROBACHEVSKY: If you look back historically, the first solution, ingress filtering, was proposed in 2000 in BCP 38, which is a pretty empty document, but it is used as a code word to bring under one umbrella all the solutions that we can use for fighting spoofing. Many solutions were developed and fine-tuned, and still Spamhaus experienced tens if not hundreds of gigs of attack caused by a DNS amplification attack. So what do you think are the stumbling blocks -- why are those solutions not getting through?

MERIKE KAEO: My experience is actually quite global, since I have been doing workshops globally and have very much been a proponent of doing ingress filtering, and what I find is that there are multiple problems. One is that sometimes people really just don't care, and there are some geographic regions where they need regulatory requirements to actually do something, which I am hoping will not be the case.

There are also issues where sometimes there have been vendor problems -- I call them bugs, especially if they do get fixed -- but if somebody has been hit by one, then they are very much averse to trying to configure something like uRPF or some filters, because the primary thing they need to do is get packets through. So a lot of people are also afraid to configure anything, because they don't want to adversely hit their networks.

MAREK MOSKAL: I will add a bit of a positive comment, because I think the situation has somewhat improved. Ten years ago we didn't have many tools to fight those attacks; there were problems with some of the features available on routers and other networking equipment. But thanks to feedback from the communities, right now practically every service provider edge router has some kind of filtering available -- usually input ACLs, unicast RPF and other features that fall under the BCP 38 umbrella. I think education has also improved a lot. Ten years ago it was hard to find a person who could do something about it; then everybody could go out and buy the ISP Essentials books that you probably already know, and thanks to RIPE and other organisations there is much more knowledge being shared. So this is the positive side, although, coming back to what you said earlier, there are also more and more people who know about it and who are on the dark side, so naturally, even though people are able to clamp down on what is happening, the percentage that is coming through is still quite high. So I think we have tools, we have some of those stumbling blocks too, but overall we are moving forward on eliminating this particular area of problems.

ANDREI ROBACHEVSKY: When you say tools, does that mean there are multiple solutions, and no one solution that can be universally deployed -- is that correct? And a related question: do you think it's straightforward for operators to know which tools to use in what circumstances?

MAREK MOSKAL: So it's not just a simple "disable DDoS attacks" or "disable spoofing" knob that anybody can configure on a router; it's a much more complex problem. But we are starting to get a lot of pieces put together on the technical side, and we are also starting to get some of the political and organisational things going on top of that, where we have service providers working together on tackling this problem.

DAVID FREEDMAN: I have to agree with Marek, tools are very important and useful. I can remember a number of years ago, when we first noticed the prevalence of spoofing-based attacks, I was running a piece of equipment where the vendor implemented a tool to look at whether there were any flows passing through the box coming from an IP you were looking for -- I think it was called source IP tracking -- and you could use that to track down the ingress port they were coming from and perhaps identify the peer. From there we ended up with our good friend uRPF. For all its failings, it's actually a very good tool, though it may not be entirely applicable to all of the situations that you need it for. And I think this has been said many times: since the real principle behind BCP 38 is filtering, you shouldn't detract from the fact that the tools to filter existed in the first place; the tools we build on top of that are there to make your lives easier, but if they are not available to you -- there are some people that go and look and say, oh, it isn't available and therefore we don't do it -- there are other means, and you shouldn't lose sight of the goal, which is to perform some kind of filtering. Some filtering is better than none.

ANDREI ROBACHEVSKY: While you are answering those questions: I think for operators it's also important to understand what kind of risks they are incurring when deploying those tools. Do you think this knowledge exists in the community, that such documentation is available?

DAVID FREEDMAN: It sounds simple, doesn't it: if a customer announces a set of prefixes to you, off a prefix list perhaps, you can, either manually or using uRPF, enforce from the forwarding perspective that the packets entering through that interface are sourced from the prefixes that you accept from the customer. Seems very straightforward. You will find situations where that doesn't apply. And then the risk, I guess, depends on the customer and the deployment. Some networks are very risk-averse; they won't take risks with anything -- we see that in other areas, also in IPv6 deployments, some of which we saw today. I think it's very interesting, because of the risk versus the value of what you get when you deploy this across the customer base: I would assume that as customers increase in value, you as an operator develop the testing techniques to go off and prove this in a lab before putting it on the live network, so I am not quite understanding why the risk is so important, really, when the value for the community as a whole seems to be far greater.

MERIKE KAEO: A comment on your question about whether some of these guidelines are available: I came across something that I had forgotten about, which is an Internet draft on best operational experience with BCP 84. It got to version 3 and never went any further, but I took a quick look at the document and it is actually pretty good, and people should be aware of it because it's still relevant. BCP 84 is the one that looks at ingress filtering in a multi-homed environment, and the author had written a pretty lengthy document on his personal experience with it. I do think some of these documents and best current practices exist; they just don't exist in one place. And so my hope is that, somehow, they will be eventually; I am not quite sure how yet.

ANDREI ROBACHEVSKY: Thank you. So, well, Benno showed some slides; the data might be biased and too optimistic, but still, 80%, that is not bad. Still, we have those 20%, and that kind of looks like a wide open door, right? So my question is, if we look at the edge specifically -- and the edge is growing as the Internet grows -- is it actually possible to make it watertight? What is the best strategy? Are there some points, some types of networks perhaps, where our efforts to promote those anti-spoofing techniques and technologies would be more effective? What do you think?

HESSEL SCHUT: What we see is that most of the attacks originate from leased boxes at hosting providers -- that was from the time that we saw unspoofed attacks, and I assume that is still the case -- so I think that is where most of the focus should be, also. The normal DSL line is, especially for script kiddies, too much of a hindrance to do any serious spoofing, I think.

NICK HILLIARD: I think if we try to deal with the edge -- the absolute customer edge, the customer hand-off point -- it's too big a problem, and if we try to deal with it closer up to the AS path boundaries, it's going to end up being a much easier problem to deal with, because you are going to end up with a combination of two things: far fewer constriction points, which makes it inherently easier to do anti-spoofing filtering, and more clueful people, and you are probably going to be dealing with equipment which is able to handle anti-spoofing requirements. If you are going to go out to the customer edge, all of your edge equipment has to be able to handle unicast RPF filtering, or you have to set up your provisioning system to deploy all of these filters out to each of your customers, and that can be a huge problem to deal with. It strikes me that if we could bring that up a level, the level of anti-spoofing won't be as good, but it will still be pretty good, and we can do it on a per-AS level instead of a per-customer level. It's not a perfect solution, but it is going to reduce the problem to something that is a whole pile more manageable.

MERIKE KAEO: I actually think that the problem needs to be addressed from more than just the ISP side, so I will differ just a little bit, because I do think that for CPE devices, if they had a filter where someone had to enter the address block that was assigned -- obviously that is not going to be the case for everybody, it's a small subset -- but that small subset would also make a big difference, I think.

NICK HILLIARD: Going back to one of the points I think Dave made earlier, he suggested the notion that well, OK, for AS path boundaries we already have prefix filters, and on some types of routers you can use the same type of prefix lists for prefix filtering as you can for ingress traffic filtering. And I think there is a major win here, actually, because it means you have to configure less on your devices which makes the provisioning problem much easier. It's not possible on all systems but it certainly is possible on some of them.
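
A toy provisioning sketch of that idea: drive both the route filter and the ingress packet filter from the same customer prefix list, so the two cannot drift apart. The AS number and prefixes are placeholders, and the output is generic pseudo-config, not any particular vendor's syntax:

    # Generate a route filter and a matching ingress packet filter from one
    # customer prefix list; output is generic pseudo-config, not vendor syntax.
    CUSTOMER = "AS64500"                                      # placeholder
    PREFIXES = ["192.0.2.0/24", "198.51.100.0/23"]            # e.g. from the IRR

    def route_filter(name, prefixes):
        return [f"prefix-list {name} permit {p}" for p in prefixes] + \
               [f"prefix-list {name} deny any"]

    def ingress_packet_filter(name, prefixes):
        return [f"filter {name} permit source {p}" for p in prefixes] + \
               [f"filter {name} deny source any log"]

    for line in route_filter(f"{CUSTOMER}-IN", PREFIXES) + \
                ingress_packet_filter(f"{CUSTOMER}-SRC", PREFIXES):
        print(line)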

DAVID FREEDMAN: I just wanted to say a few words about the type of edge that we are addressing here, because it has evolved, it really has. When I think of the edge that worries me now, I think, as Hessel said, of the hosting and systems side, more specifically cloud computing. It's quite trivial to spin up a few virtual machines at a cloud provider, and if that allows you to spoof, you could automate the whole process, including shutting down the machines afterwards; it's quite scary to think about. The edge is also reflectors, as Merike says, through CPE: there is quite a lot of CPE out there that, if you fire packets at it, acts as an open resolver, and getting a list of these together, knowing which ones to fire packets at and what produces the best return on investment, shall I say, in an attack, combined with being able to launch from a number of attack sources quite effectively from the cloud -- it's just a disaster waiting to happen. And I can't believe that we are leaving ourselves open to this.

MERIKE KAEO: For some statistics: at the DNS-OARC meeting, Jared was presenting at the same time as Olafur Gudmundsson, and they were both doing similar testing and their numbers matched. They found 30 million open recursive resolvers, and it turned out that both of them found that over 40% were what they think might be CPE devices, which would also be open to spoofing.

SPEAKER: I want to jump in real quick regarding the CPE devices. There are multiple different actors out there with different capabilities, and the type of actor that might wind up doing easy reflector attacks is not the same as the one that may be in possession of well-provisioned web servers and launching attacks from those. I am not saying that anything that has been said is wrong, but when we are talking about addressing risks and remediations, it helps us to be clear on the types of threats that are out there. If we look after the CPEs so they are not giving as much packet love, it doesn't mean we shouldn't do it; we should just be conscious that there are multiple people with multiple capabilities and agendas out there, and they are going to act differently.

MERIKE KAEO: Don't forget cell phones -- what I found out is that some phones, if you tether them, become open resolvers -- and also do not forget IPv6.

DAVID FREEDMAN: Very good point.

MAREK MOSKAL: Since the original question was about the edge, it depends on the definition of the edge. I think over the past seven years we have pretty much tackled the problem of the conventional edge -- where you have subscribers, dial-in, DSL, cable, and also business subscribers -- and we are pretty good with IPv4, right? But right now we are moving on to completely new types of edges, like data centres, clouds, IPv6 and so on. So I am afraid that before we can pat ourselves on the back, we haven't finished one job and we have already moved on to the new edge, or the next-generation edge, or whatever it is these days.

ANDREI ROBACHEVSKY: I think that brings us back to the first question: the landscape has pretty much changed. So my question is: is there a way out? Actually, Benno had a backup slide. We had invited Daniel Karrenberg as well -- he was one of the co-chairs of the task force seven years ago -- but unfortunately he couldn't come because he has an important appointment today, and this is the message he wanted to convey. It doesn't give a solution, but it's a warning as well. So my question would be -- and I would like to have more discussion here and then we will move this discussion to the audience as well -- what collective action do we, as individual networks and as a community, need to take to remediate this problem and find a way out?

DAVID FREEDMAN: I just want to add, based on what you have on this slide, that the word "regulation" here is quite important, because it could end up being a repeat of the debacle that was RPKI: if this is seen to be a problem, somebody will come in and regulate in the vacuum, so there is an onus on us to do something here. So, solutions and moving forward, something positive from this: well, education is one thing, capability is another. I think what is missing are practical examples, as Merike said -- perhaps even going further than recipes, looking at individual cases of customer set-ups, business cases, real documentation. Something more supportive than just "add this command to all of your interfaces". Some real-life examples of real cases that people can go back and use, a credible document that you can put in front of your management teams and say we should be doing this.

MERIKE KAEO: And while before I was pretty negative about some people not caring, there are many people who actually do care; they may just have so much to do that they need somebody to point out: hey, you know something, you have a network that has a large number of hosts, or that has the capability of being a launch vector for these spoofing attacks, so can you do something about it? So I am a huge proponent of getting measurements and real data as to which networks, and where, spoofing can be done from -- the same as what has been done with the Open Resolver Project, where you have ASes and you go to the AS and say, hey, in your environment, look what you have -- because that will at least give them information that maybe they don't have time to get by doing the testing themselves.

ANDREI ROBACHEVSKY: I think that is a very good point, because many networks are not aware that they are part of the problem, right.

ERIC OSTERWEIL: I want to add --

MAREK MOSKAL: I was going to add to what David said: education is the single most important thing; unless we educate the people out there, they will just never know any better. And it's not only networking people, it's also application people who right now work with the cloud, with software applications that run over the network -- but that is pretty well known. Also, one thing to keep in mind: we are running at higher and higher speeds, we have 100 gig, we are going to have terabit speeds soon, and remember that it takes about three or four years to design and deliver certain functionality. So if the community agrees on certain features that should be available in products, the latest time to start asking manufacturers and vendors to implement those features is now, and it could take up to three or four years until all the equipment can really do it. So keep in mind that not everything can be done within a year or six months; it's not just a quick fix.

ERIC OSTERWEIL: Just a comment. I think one of the things we could be thinking about, which is at a sort of different layer, is all the information sharing work that is starting to go on. When people are under attack, or have indicators or incidents worth sharing, there is a growing set of people and communities trying to address how to share incident and indicator information that would help someone -- if not to set up general filtering, then to filter on something very topical, happening right now or about to happen to you. In particular the financial institutions have banded together, and this has been very useful for them, and there are some existing standards coming back up to speed, like IODEF in the IETF being dusted off, and some things called TAXII and STIX being run out of another group. They are trying to find their place and whether they fit, but I think that, in addition to everything we are talking about provisioning in the network, if we work on the information sharing frameworks we are looking a little less directly at the symptoms and more at what we can do at a higher level for the common good.

ANDREI ROBACHEVSKY: It's interesting that you touch on collaboration -- that is how the Internet works, together, and that is why we are here, basically, not just to look at and listen to nice presentations and nice panels. So, one of the things we discussed is the technical building blocks and capacity building, but from the social perspective I like one analogy: washing hands. It has a double effect: it protects you from bacteria and germs, and you are not spreading things. And the second part is, well, it takes time before that culture changes, right? Anti-spoofing activity is the same: if you deploy anti-spoofing measures, you are not actually securing your own network -- your resources can still be attacked -- but you are contributing to the good of the Internet. So, my question: what kind of measures do you think we can take to instigate this cultural change, so that people don't think just in terms of the immediate business case -- how does this protect me -- but in terms of reciprocal actions?

DAVID FREEDMAN: In a commercial world it's either regulation or reputation -- it's either compliance or reputation. If you are known to be a network that is unhygienic, then people will, over time, consider how they do business with you. If you are a network that has compliance requirements for various products that you sell, then I think you will find it very difficult to operate an unhygienic network as some of these client requirements mature, and you have to ensure that you put more and more controls in place to deal with this.

ANDREI ROBACHEVSKY: Do you think -- well, compliance and regulation is one thing, but peer pressure is another, like a name-and-shame strategy. Do you think that would work in this environment?

MERIKE KAEO: I will make a comment, because we were just having this discussion at lunch. I personally think that naming and shaming is so derogatory that I would rather people say: yes, we are making you aware that this is in your environment. And, you know, I will make the joke -- and I am probably being videotaped and I am going to hate it -- women manipulate, so we will say it in a nice way. But do I think that actually raising awareness in terms of where spoofing is possible, where the capability is, is necessary? Yes. I do think that measurement tools are needed, and I think that would really help a lot. And we have been dealing with this problem for ten years. I have given workshops where every second word is "filter", until 30 people say "please stop, I am going to filter", and then I look at their configurations and I don't know if they have, but hopefully...

The problem is, it's big, and I was kind of joking that maybe somebody should just fund a couple of people to go around and look at everybody's network and make sure they clean it up, because sometimes it's the hand-holding that is absolutely necessary.

ANDREI ROBACHEVSKY: We have only 40,000 ASes, right, so it's doable, I guess. I think we have warmed up the audience well enough, and I would like to open the mics and also let the panel respond to comments and questions. Joao first.

AUDIENCE SPEAKER: Coming back to Daniel's point -- seven years ago; it was kind of heartbreaking to find out it was really seven years ago -- the document appeals to people's business interests in order to get them to do something, and as Benno said, this hasn't really been all that effective. One of the fundamental problems is that the cost and benefit are not aligned: I have to incur a cost in reconfiguring my network so that you are saved from getting the attack, and even though this is reciprocal if you do it too, no one actually moves on that; people are not that altruistic. So I have the biggest sympathy for Daniel, but I think that is never going to take off. So what happens if you don't do anything? Eventually someone will step in and regulate when the pain level gets to a certain point, and from what I hear it's getting close, right? If you want to avoid that -- because sometimes regulators don't have as broad a perspective as you would wish they had -- then you have to do something. I work for ISC, and many, many years ago, in the good old days, BIND used to ship with no restrictions on which interfaces it would answer questions on, particularly when acting as a resolver; the default was, if you ask me, I will answer, it doesn't matter which direction you are coming from. We took a first stab at changing the default and basically got flamed. We took one step back, went to the IETF and wrote a BCP that says open resolvers are, in general, bad. That cost me quite some hair on my head, and it took longer than anyone would wish, but in the end it was approved, and with that document in hand we changed the default in BIND. It's no longer the case that a default installation of BIND will answer on any interface, so it is possible to mitigate this through your own actions, through one specific act. In this case -- I know that Cisco, for instance, takes forever; it took I don't know how many years to enable classless routing by default, much longer than would be desirable -- please look at changing the defaults for these things. Don't disallow it entirely; there may be legitimate reasons for someone to allow addresses coming in through interfaces that don't carry that traffic on the way back, but most of these things that are open are not intended to be; it's just that the default behaviour is that way and nobody bothers to check it or change it. So if the default becomes restrictive, and then you write a little how-to -- if you really want to do this, then this is how you do it -- that has an immense effect with very little effort.

ANDREI ROBACHEVSKY: You want to flip it over, so people would need to deliberately deconfigure anti-spoofing rather than configure it.

JOAO DAMAS: If you want to change it, by all means, here is the command to change it. But the problem, just like the problem with open resolvers, is that people use whatever comes out of the box, and if it runs, it runs and they don't look at it again. You can start educating, publishing, going around giving all the talks you want, and no one will move a finger to change that, so please consider changing the defaults. Since Cisco was mentioned...

MAREK MOSKAL: Changing defaults is a hard thing to do. Technically it's the simplest, but logistically the hardest thing to do, because you break people's scripts — speaking for IOS. This is moving forward, though. For example, if you look at IOS XR, when you configure a BGP peer it will not announce or accept any routes unless you specifically configure a policy; classic IOS will accept any route without any filtering. There is also work being done on IOS where you could issue a single command to lock down the router, and it would reconfigure everything in a specifically secure way. These are things that happen over time, and I think there is a bit of progress. The other way to make progress is to request certain things from vendors and service providers; that is one example of, let's say, commercially enforcing the best practices. If a customer wants to buy from a service provider, he can always ask for a check box: do you adhere to BCP 38? Do your peers, and how many of them apply the same BCP? It may not work initially — it's like saving water: oh, I am just one person, there are hundreds of people here, so I mean nothing. Not true. If there are enough people, enough momentum, then things will start to happen, and that altruistic behaviour can be changed and move things forward a bit.

ANDREI ROBACHEVSKY: What I would like to do now — I see many people at the mics, and there is a queue at the back; I don't know who came first. Then I will let the panel respond. So unless there are direct questions, I will just let people go, and I would like to start with that mic.

AUDIENCE SPEAKER: Dominic from rrbone. Regarding the hint from Dave on taking people by the hand with configuration and all these things, I would like to point out that it's quite easy to figure out whether your customers are being used for attacks by just analysing flow data, and this is something we should consider when giving hints on configuration, as configuration alone doesn't make it work. In my eyes we should also point out that people should further analyse all the flow data they are collecting anyway nowadays, to detect any UDP servers being used for spoofing. We discovered ourselves that our flow analysis wasn't as good as we thought, because we are running a root instance and we have open Windows DNS resolvers on our network, and we were basically attacking ourselves: the open Windows DNS resolvers were asking our root instance for arbitrary records, and we were just wondering why there was so much traffic coming along. Our flow detection didn't see it because these were smaller flows which didn't match the pattern, and it was also internal traffic, so we basically trusted it. So you should always keep an eye on your internal traffic, because even if it stays in your network only, it doesn't mean it's trusted by default.
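
To make the flow-analysis suggestion concrete, here is a minimal sketch in Python, assuming flow records have already been exported to CSV with src_ip, dst_ip, proto, src_port, dst_port and bytes columns; the field names, prefixes and thresholds are illustrative assumptions, not tied to any particular collector:

import csv
import ipaddress
from collections import defaultdict

# Your own prefixes (example value only).
INTERNAL = [ipaddress.ip_network("203.0.113.0/24")]

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL)

def suspects(path: str, min_bytes: int = 50_000_000, min_targets: int = 100):
    """Flag internal hosts sending large volumes of UDP/53 replies to many external hosts."""
    out_bytes = defaultdict(int)
    targets = defaultdict(set)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["proto"] != "17" or row["src_port"] != "53":
                continue  # only UDP traffic sourced from port 53 (DNS responses)
            if is_internal(row["src_ip"]) and not is_internal(row["dst_ip"]):
                out_bytes[row["src_ip"]] += int(row["bytes"])
                targets[row["src_ip"]].add(row["dst_ip"])
    return [(ip, out_bytes[ip], len(targets[ip]))
            for ip in out_bytes
            if out_bytes[ip] >= min_bytes and len(targets[ip]) >= min_targets]

if __name__ == "__main__":
    for ip, nbytes, ntargets in suspects("flows.csv"):
        print(f"{ip}: {nbytes} bytes of UDP/53 replies to {ntargets} external hosts")

The same idea applies to internal traffic: run it without the "not is_internal(dst_ip)" condition and the self-inflicted amplification Dominic describes would also show up.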

ANDREI ROBACHEVSKY: Thanks.

AUDIENCE SPEAKER: Joe Provo, ITA Software by Google. Just an observation and a request of the community at large. You made mention, happily, of the CAIDA Spoofer project. I was at an event earlier this year where some people who should have known better were on a stage saying, gosh, I wish there was some easy downloady-pointy-clicky spoof test, and they should have been aware of this already. The fact is that while there is source code all of us would happily compile for our own use, it is easily downloadable for Windows, OS X, even for Linux. I just want to underscore that there are perhaps people in the community who are not aware that it is that easy, and that in addition to testing our own networks — which is probably what is getting the 80% in these statistics — we should be testing our home provider networks, and when you are having to do your parents' IT, test their networks too. And not just apply peer pressure or name-and-shame to providers who are allowing spoofing, but apply support pressure, because that will hit their wallets and they will wish to address things. If you open a support ticket with them saying, hey, you are allowing spoofing, stop this — if we could outsource that a little bit, it can get better.
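
For readers who have never seen what such a test does under the hood, here is a minimal Python sketch of the idea — not the CAIDA Spoofer tool itself — that crafts one UDP packet with a source address you do not own and sends it towards a measurement sink you control, to see whether it escapes your network. All addresses are documentation/example values, it requires root, and some operating systems block raw-socket spoofing regardless of the network:

import socket
import struct

def checksum(data: bytes) -> int:
    # Standard Internet checksum over the IP header.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return (~total) & 0xFFFF

def spoofed_udp(src_ip: str, dst_ip: str, dst_port: int, payload: bytes) -> bytes:
    udp_len = 8 + len(payload)
    # UDP header: src port, dst port, length, checksum (0 = not set, legal for IPv4).
    udp = struct.pack("!HHHH", 53000, dst_port, udp_len, 0) + payload
    total_len = 20 + udp_len
    ip = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, total_len, 0, 0, 64, socket.IPPROTO_UDP, 0,
                     socket.inet_aton(src_ip), socket.inet_aton(dst_ip))
    ip = ip[:10] + struct.pack("!H", checksum(ip)) + ip[12:]
    return ip + udp

if __name__ == "__main__":
    # 192.0.2.1 (TEST-NET-1) stands in for "an address that is not ours";
    # 198.51.100.10 stands in for a sink you operate that logs arrivals.
    pkt = spoofed_udp("192.0.2.1", "198.51.100.10", 33434, b"spoof-test")
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    s.sendto(pkt, ("198.51.100.10", 0))
    s.close()
    print("sent 1 spoofed probe; check the sink to see whether it arrived")

If the probe arrives at the sink with the fake source intact, the network you tested from is not doing source address validation; only test networks you are responsible for or have permission to test.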

DAVID FREEDMAN: Hey, you are violating my AUP, can you stop this.

AUDIENCE SPEAKER: Jan Zorz from the Internet Society. I really love seeing these different points of view on this panel and in the audience at the mics, but then at the end agreeing with each other that we have a problem that needs to be solved. I would suggest taking this as a first topic into the best current operational practices effort that we are trying to start around the region, because this is a good example of a document that should not differ between the regions; it's all the same all over the world, globally. So I would like to encourage you to join the effort and start something. Thank you.

ANDREI ROBACHEVSKY: Thanks. Another round if you don't mind and then I will give you some air time, please.



ANAND BUDDHDEV: Daniel Karrenberg says: I would like to come back to what Hessel Schut said; much of the bandwidth in recent attacks comes from rented and unspoofed sources. My question: are we concentrating on a marginal problem here if we concentrate everything on spoofing?

ANDREI ROBACHEVSKY: Well, I think that deserves an answer, right.

HESSEL SCHUT: I think I am being misquoted, because that was in the past; it was in 2009 that we last saw unspoofed sources in the attacks we investigated. Nowadays we actually only see spoofed sources, and we have a hard time finding the source of those.

ERIC OSTERWEIL: We see a lot of both, and I will claim they are very different problems with different actors. I think the size of the spoofed reflector attacks that we see definitely warrants attention — I would definitely not want to be on the receiving end of that packet load at home or anywhere else — but there are also booter services where you can get stable connections, and we should not conflate these things, because remediating one or taking away the other doesn't necessarily mean we are safe. They are separate actors and separate problems: a couple of fires at the same time.

AUDIENCE SPEAKER: Marco Hogewoning, RIPE NCC. Actually, Olaf gave an excellent talk yesterday about diffusion and technology lifecycles and how innovation spreads, and this is one of those clear cases: we hit 80%, and in terms of spoofing we are now looking at finding that last 20%. The unfortunate thing is, I think the 80% is in the room and the 20% is not; the 20% is not watching us and is probably not going to listen. And to get to what is on the screen: we need to address this problem before somebody addresses it by regulation, and the unfortunate case is that if we can't fix the 20%, some regulation will try to fix it — and I am not even sure whether that is effective. So in terms of outreach and capacity-building, somebody mentioned having people go around, paying them: do you really think you will be invited by those people, the last 20%? That is always the case; actually, in the work Olaf mentioned yesterday from Rogers, if you look at the history of diffusion in all kinds of fields, change agents are usually not the best way to do it. You need peer pressure, and that leads to naming and shaming. So my question to the panel is — well, this is bigger than RIPE; after all, we are just a regional entity, not a global one, and I do think this is a global problem — what would be the venue to reach out to those 20%, or is there actually a venue to reach them at all? Rogers defines them as the late majority, or even the laggards: how do we get them to join us before somebody indeed pops up and says, we are going to regulate this? And I am not confident that regulation will solve the last 20% either. Because we can sit here in the room and of course think about solutions and write the best current practices, but in the end we have to get that best current practice implemented by the 20%, and the 20% is unfortunately probably not here. If they are here, they are pretty stubborn and ignoring everything, and I am not sure we can ever convince them to do this.

MERIKE KAEO: I am actually wondering — I think hosting providers are also a large part of the problem at times, so do they actually attend these communities, or is that one of the outreaches that could be done? And maybe we should look at whether there are specific communities where we see that, hey, a lot of spoofing is coming from over there: who are they? Can they be categorised, and, as you say, maybe then do the outreach? It's not realistic to travel around the world and try to convince everybody, absolutely not, but there could be communities that we may be able to reach out to, and the thing is to look at who they are.

MARCO HOGEWONING: In terms of hosting, maybe we should reach out to the TLDs, because they are a good connection to the hosting communities.

ANDREI ROBACHEVSKY: Another mic.

AUDIENCE SPEAKER: Aleksandr Saroyan from the Russian Federation. Some time ago there was a suggestion to add a spoofing test to the RIPE Atlas probes, and as far as I remember it was rejected by the Atlas team, but now that we are discussing this, perhaps we should ponder this question again. And a question to the guy on the panel who has real power: is the Dutch crime centre somehow successful in finding the real sources of such attacks? Because any attack has a real person behind it, and only you in the crime centre have such powers. Do you have any successful investigations and punishments? Thanks.

HESSEL SCHUT: Well, it's very hard to find the source of spoofed attacks; in a technical way, I don't recall that we have ever been successful in that. But most attacks have a reason, so we still find the criminals, often, because they leave other traces than the traffic that they generate.

AUDIENCE SPEAKER: So are you working on the process or on the result?

HESSEL SCHUT: Yes — I would love to get input from people in the audience on what we can do to find the sources of these spoofed attacks. For now, we don't actually know how to trace them back to their origins.

AUDIENCE SPEAKER: There is a way, going back to Daniel's point. I came up to the mic thinking: I am this botnet rent-out company and I am actually very happy that you deployed BCP 38, because now I can put the price of my botnet up by a factor of ten — you just need ten times as many bots in order to have the same level of attack, because now you don't have reflection; you just use direct addresses, native addresses so to speak. As long as those botnets are around, we will still suffer from distributed denial of service. By deploying BCP 38, which I think is very important — it is basic hygiene — you are not solving the problem of DDoS; let's not kid ourselves. And I think that is an important part of the homework as well: make sure you understand the flows in your network and, if at all possible, have the means in your network to isolate the sources of bad traffic and shut them down. That is, I think, a completely different ball game, which is much more expensive than a matter of configuration, but it's the next step in the arms race. I can't speak for Daniel, but I think that is the sort of point he was making when asking: are we kidding ourselves by looking at only this part of the problem?

ANDREI ROBACHEVSKY: Let me clarify your message: is it that, even though we cannot be 100% secure and protected, we should still make an effort? It's like human life: we will never get rid of bacteria and viruses, we will live with them in this world, but that doesn't mean we should just surrender.

AUDIENCE SPEAKER: I think it's basic hygiene, and indeed it is so easily explained how this basic hygiene works that it's almost ridiculous if people do not deploy it. At least from the regulator's view, so to speak, it is a very easy message: if you have a network and addresses originate from your network that are not yours, that is silly; you have control over that. That is very easy to explain. But that next step, where attack flows originate from your network using addresses that are supposed to originate from your network — how do you cope with that? It's the next step of the arms race, and if we are talking about these sorts of problems, we shouldn't ignore that that is the next piece. And as you just said, we don't see that traffic any more; we only see spoofed addresses. Well, if you don't have spoofed addresses any more, it will go back to the original type of traffic. I would be willing to bet on that.
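
The "easy message" here is exactly what BCP 38 ingress filtering enforces at a customer edge, and it can be expressed in a few lines. A minimal Python sketch of that check, with example prefixes standing in for the addresses actually assigned to a customer:

import ipaddress

# Prefixes assigned to this customer (illustrative example values).
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("2001:db8:100::/48"),
]

def source_allowed(src_ip: str) -> bool:
    """Return True if src_ip legitimately belongs to this customer's assigned space."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

if __name__ == "__main__":
    for src in ("203.0.113.7", "198.51.100.99", "2001:db8:100::1"):
        verdict = "permit" if source_allowed(src) else "drop (spoofed or misrouted)"
        print(f"{src}: {verdict}")

The harder problem described above — bots using their real, allowed addresses — passes this check by definition, which is why the speaker calls it a separate, more expensive problem.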

ANDREI ROBACHEVSKY: If you are building queues at one microphone you are lost.

GERT DORING: I want to make a couple of different comments, but I will try to keep it short. One of the things suggested was that we should get customers to ask their ISPs to please implement BCP 38. How well that works we have seen with customers asking for IPv6, so I don't think that is ever going to work, because if anything the customer will have an urge not to have BCP 38 so they can play tricks; I have seen customers ask us to turn it off because they assumed it would damage their traffic. We didn't turn it off, but anyway... I don't think we can educate the unwashed masses to ask for this in large enough numbers calling the call centre or the salespeople. Anyway. The main thing I want to comment on is actually not the spoofed sources but the reflector boxes. The problem with spoofed sources and reflector attacks is that it's very hard to find the sources, because the reflector boxes help them hide. So, if we are back to unspoofed sources, I know who to whack, and that will be an improvement: the attacks won't go away, but I can find the culprits and the ISPs that connect them. Right now, I think the first step that would actually help tune down the amount of DDoS traffic we see would be to fix the open resolvers, because that is the lowest hanging fruit, for the attackers and for us. We can find the open resolvers — we cannot find the spoofed sources, but we can find the open resolvers — so we should do that and rate limit them, or whatever is appropriate. At the same time, we should really put in place ways to backtrack traffic. That is, if somebody is using my authoritative DNS service to play a reflection attack, I want to be able to see where the spoofed packets are coming from, so my NetFlow needs to tell me the flow is coming in over DE-CIX with a particular source MAC. There is work for the vendors to do here: right now, as far as I understand, nobody can do NetFlow with proper source MAC reporting, so if spoofed traffic is coming in over an exchange point with 300 peers I have no way to see who is sending it to me.
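
On the "we can find the open resolvers" point, here is a minimal Python sketch of such a probe, assuming the dnspython library is installed and that you only test addresses you are responsible for; the probe name and candidate addresses are illustrative:

import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def is_open_resolver(ip: str, timeout: float = 2.0) -> bool:
    """Return True if the host answers recursive DNS queries from this vantage point."""
    query = dns.message.make_query("example.com", dns.rdatatype.A)  # RD flag is set by default
    try:
        response = dns.query.udp(query, ip, timeout=timeout)
    except Exception:
        return False  # unreachable, filtered, or refusing to answer us
    # An open resolver answers with the recursion-available (RA) flag set.
    return bool(response.flags & dns.flags.RA)

if __name__ == "__main__":
    candidates = ["203.0.113.53", "203.0.113.54"]  # addresses you operate (example values)
    for ip in candidates:
        print(ip, "OPEN" if is_open_resolver(ip) else "closed/unreachable")

Run against your own address space from an external vantage point, this gives the inventory of resolvers to close or rate limit that Gert describes as the lowest hanging fruit.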

AUDIENCE SPEAKER: sFlow.

GERT DORING: Which platforms? OK, Brocade can do it; Cisco can't, Juniper can't. If you tell me all these platforms can do it, I am a happy camper. Everything I have found is that some Cisco boxes claim to have support but look it up in the ARP cache and not in the packets, so this is completely rotten. We need to work on the backtracking capabilities, to find these flows and find them in time — while the sources are, so to speak, still dialled in — so we can actually find the user in question.

ANDREI ROBACHEVSKY: I think it also goes back to Erik's comment about data exchange and coordination.

DAVID FREEDMAN: Collaboration is very important. It reminds me of situations where there is an attack coming over an exchange and you don't have the telemetry. What you can do — if it's a big exchange in northern Europe, a lot of them have sFlow data that they make available to customers — is look at the graphs and the spikes and say: you are sending us a million packets per second, check your NetFlow. And the response is: what is NetFlow?

GERT DORING: You can do that if the peak is big enough to be visible on the exchange, but if it's 100 megabits — which, multiplied by 40, is a pretty good attack — it's hard to see that on the exchange.

DAVID FREEDMAN: That is if you are looking for sources being amplified — yes, if you are on the receiving end of an attack.

ERIC OSTERWEIL: What Olaf said — that was sort of the point I was driving at with information sharing. If there was a way to express something to you that you wanted to hear, in a framework in which you could ingest it really easily — an automated framework that did as much of the work as possible: you are about to be on the receiving end of something that I observed in my network, here is something that might help you. For people who haven't looked at that before, I will just say that the community would really love for you guys to say: these are the things we'd like to be able to express. That has been really helpful to us.

MERIKE KAEO: One last comment: I want to echo what Erik is saying. I think information sharing and collaboration globally is something that does need to happen, and I am very much in favour of it.

ANDREI ROBACHEVSKY: That is cool. Actually, I would like to make a comment as well, if I may. Coming back to this analogy of viruses and bacteria: we will live with bacteria all the time, but it makes us stronger, so I think we can use this as an opportunity and make this community stronger, for instance by improving data sharing and collaboration. So what I would like to do — we are having a tremendous discussion, which is great, but we are running out of time, so I would like to close the microphones; those already at the mics are OK. And I give the floor to this microphone.

AUDIENCE SPEAKER: Robert Kisteleki from the RIPE NCC, speaking for Atlas in response to the question of whether Atlas should do spoofing tests or not. Let me quote Daniel Karrenberg verbatim: "Not under my watch." If there is clear consensus from the community that we should do it, of course we can look into it. Even then we would need explicit consent from the host who is actually hosting the probe. So there are lots of caveats about it, but we need to hear from you that you want this, and then we can look into it.

AUDIENCE SPEAKER: Yes.

AUDIENCE SPEAKER: Peter Koch, DENIC. I am going to suggest the headline for the occasional journalist in the room, which is: "Panel at RIPE meeting awaiting regulation to happen." More seriously, I am a bit confused. On the one hand I hear, well, this is an 80/20 thing, we can't reach the 20% and all we can do is push this out to the customers somehow. On the other hand we hear, well, if we don't do anything, the regulator will come and do something. So obviously the regulator is in possession of a magic stick which the community does not have, which confuses me. I hear a very restricted view of what the opposite of regulation is, but usually the opposite of regulation is industry self-regulation. All I have heard about this to date is name and shame, and I think there is a lack of fantasy here; I'd like to hear something about that from the panel.

ANDREI ROBACHEVSKY: We thought we were rounding this up, Peter, actually. But thank you for the question.

DAVID FREEDMAN: I was just going to say I thought the opposite of regulation was deregulation. That is all very well and good, but it's probably fair to say that, ten years — 13 years, even — later, nothing has happened, and I am not sure the self-regulation thing is working as well as it should. We are missing an important piece, and that magic stick you are talking about is penalties for non-compliance. Regulators these days have teeth and they will use them. That is all I had to say.

MERIKE KAEO: I am in a conundrum here. I hear this 80%, but when I have talked to Robert Beverly, he said some of the algorithms aren't quite there, so do we really know what the percentage is? I would argue that we don't really know. And I do also think that every couple of years, when these attacks become more prevalent, people take a look and some people do something about it, some don't — most don't — and everything goes back to normal. Two years later there is a big attack, some attention, some people act, some don't. So it will be interesting to see, with this particular large-scale attack — and I know there are huge efforts ongoing right now to help with education and to see where the spoofing occurs and where the open resolvers are — whether everybody is becoming a little bit more cognisant. I want to give it another six months to see whether or not self-regulation is getting better.

DAVID FREEDMAN: I see this flipping between route hijacking and spoofing. Route hijacking is important and spoofing is important, but it's important to stay focused on one thing, have a positive outcome and do something that leaves a legacy. When you take something away on spoofing, I am sure a couple of months later there is going to be another real hijacking incident, and the people sitting there without filters who are impacted by it will focus on the hijacking stuff. It's important to note that these two problems won't go away, so when you strategise, you need to do it for both of them.

MARCO HOGEWONING: Yes — as we are running out of time: I love this discussion. The only thing is, I think one thing is clear, especially from that last answer: it's urgent, we need to do something now. Continuing this discussion in the context of RIPE means the next possible venue is close to six, seven months away, in Athens. So, speaking on behalf of the RIPE NCC, the coordination centre — and this popped up in earlier discussions: should RIPE do something, should the RIPE NCC play a role? — I have got a wonderful meeting room in Amsterdam with about 30 seats and I have got a webcast system. Would people be interested — and I am not talking about restarting a task force or even forming a Working Group, but just, as an ad hoc solution to an urgent matter — if, in the next few weeks, we pull off a multi-stakeholder meeting in Amsterdam with people who are interested to discuss this further, see what can be done, and decide which venue should take action and in what terms? I mean, if naming and shaming is the solution, what would be the perfect entity to do that? If we need a technical solution, where should we do that — in the IETF or somewhere else? No set agenda, all possible outcomes, but just to continue the momentum we now have instead of waiting for another six months. So if people are interested, find me or one of my external relations colleagues and we will see if we can pull it together.

DAVID FREEDMAN: Thank you, the offer is appreciated, it needs to be good coffee, though.

PATRIK FALSTROM: Discussing these issues, I would also like — like Peter — to move a little bit forward here and talk a bit about what should be done. For example, if it is the case that we are nervous about regulation, we are nervous about some magic stick, but no one can really say what it is. Is the big problem here that the parties that actually do ingress filtering do not want to depeer the ones that don't? Is it an economic issue, or are you asking regulators to actually force parties to depeer? What is it people are waiting for? And if I may be a little bit pointed, being an application layer person: on the application layer we did have problems with open SMTP forwarders, people could send e-mail and spam, and we have blacklists, and if people don't fix their stuff they don't get e-mail — this is very easy. So how come this kind of clean-up sort of works on the application layer and not on the IP layer? I would also like to see more fantasy about what kind of stick regulators might potentially use, and to ask why we cannot use the same kind of stick ourselves, in the multi-stakeholder way so many people talk about.

DAVID FREEDMAN: I am surprised there have been no civil cases for damages by one peer against another — I am surprised there haven't been any public civil cases where this has played out between two peers in the same jurisdiction.

ANDREI ROBACHEVSKY: One thing I wanted to comment on regarding the regulators issue: I think we are taking action not because we are afraid of regulation, but because we realise this is a serious problem. Regulation is an important aspect, but I wouldn't be fixated on that particular aspect; it is not the driving force behind this challenge.

BENNO OVEREINDER: I think three weeks ago ENISA — the European Network and Information Security Agency, which works for the European Community, or rather the European Commission — announced something like BCP 38 compliance: are you looking into it? So they are already thinking about it, because of all the fuss, all the news about traffic being spoofed. It's something to be aware of, and maybe a reason to be proactive.

DAVID FREEDMAN: They are going to have to do both this and RPKI then.

ANDREI ROBACHEVSKY: Two people.

Erik: Question to the panel: has it actually been investigated whether the tier one providers are doing ingress filtering, beyond prefix filtering, on their interfaces — specifically on the ones where they provide BGP transit? And if not, why not?

NICK HILLIARD: This is quite an interesting point, because — and I think the 20% is actually much larger, probably 40 or 50% — the one common thing that all of these spoof-enabled networks have is that they all have upstream providers. If there is some way of getting through to the upstream providers that the spoofed traffic coming from their downstreams is actually harmful, and that they should hit them with a clue bat when this happens, then they would be doing the community a huge favour.

AUDIENCE SPEAKER: Because if there is one entity that actually knows which prefixes you should receive traffic from, it's the transit provider, because they should be doing prefix filtering on BGP anyway, on what they allow — so why not apply an ingress filter based on the same thing? Or maybe I am just being too simplistic here.

DAVID FREEDMAN: The key word there is "should". We don't know; we don't have that level of data from that close to the core of the network.

NICK HILLIARD: I think in some cases it's quite feasible to do it if you have stub ASes; where you have multiply connected ASes downstream of a single AS, it becomes much more difficult, because you don't necessarily know which traffic you are actually going to expect on the ingress interface. It could be that you would get legitimate traffic from some sort of part-time peer of a downstream AS — it might happen sometimes, it might not happen other times — and it might not be a prefix that is registered for upstream transit through that transit provider, so it's quite difficult to tell. If you did it regardless, things would definitely break.

NICK HILLIARD: For stub ASes it's very simple.
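
For the simple stub-AS case, the ingress filter is just the set of prefixes the customer is registered to originate, collapsed into as few entries as possible — the same data the upstream should already be using to build its BGP prefix filter. A minimal Python sketch, with example prefixes standing in for whatever the IRR or route objects actually say:

import ipaddress

# Prefixes registered to the stub customer (illustrative example values).
registered = [
    ipaddress.ip_network("203.0.113.0/25"),
    ipaddress.ip_network("203.0.113.128/25"),
    ipaddress.ip_network("198.51.100.0/24"),
]

# Collapse adjacent/overlapping prefixes into a minimal ingress filter.
ingress_filter = list(ipaddress.collapse_addresses(registered))

def permitted(src_ip: str) -> bool:
    """Return True if a packet with this source address should be accepted from the stub AS."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in ingress_filter)

if __name__ == "__main__":
    print("ingress filter:", [str(p) for p in ingress_filter])
    for src in ("203.0.113.200", "192.0.2.1"):
        print(src, "permit" if permitted(src) else "drop")

The multihomed case Nick describes is exactly where this breaks down: the set of legitimate source addresses on a given ingress interface can no longer be derived from that one customer's registered prefixes alone.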

DAVID FREEDMAN: I don't think it's assured that the forwarding and control planes on the Internet are actually aligned. Where we see routing data flow, that is to signal packets to be routed in a particular direction — call it a downstream direction. We don't know what happens in the other direction, because the network wasn't built for those two to be aligned. So whatever happens in forwarding is a problem that we really need to solve independently.

ANDREI ROBACHEVSKY: I would like to give the last chance to comment.

ANAND BUDDHDEV: Hi, this is Anand from the RIPE NCC. John Curran says via chat: Is this a place where documenting a best current operational practice would help? If nothing else it would provide a common starting point for regulators to consider for their actions.

DAVID FREEDMAN: I can see Jan smiling and putting his thumb up.

ANDREI ROBACHEVSKY: Thank you to the panel and also to the audience. We need to continue this dialogue; as Marco suggested, we can do this in a meeting, set up a mailing list and continue this discussion, so some action has to happen. Before closing, I would like to have a little poll and ask the people who do anti-spoofing to raise their hands. OK, I am not going to count that, but it's quite a number. And I would like to ask the people who are not doing anti-spoofing but will consider doing it after this panel to raise their hands.

DAVID FREEDMAN: Somebody started it. One, good. Yes.

ANDREI ROBACHEVSKY: OK. Well, thank you very much again.

AUDIENCE SPEAKER: Just for reference: whoever doesn't run a network, and for whom this question does not apply, raise your hands, because that sets the baseline. The difference between those sets is the one you have to worry about.

ANDREI ROBACHEVSKY: Actually, we can create a critical mass from this community; you guys are showing what good looks like, and this is a good platform to move forward. Thank you very much again, and I think Benno has an announcement.

BENNO OVEREINDER: Yes, thank you. From the RIPE NCC again: the RIPE NCC Executive Board — tomorrow morning, Todd mentioned something with an 8, well, it goes on until 9 o'clock in the morning, so try to catch them in the Diplomat room. And of course, rate the talks and the panel and do your duty as a participant.

(Applause)