These are unedited transcripts and may contain errors.
MAT Working Group session
16 May 2013 4 p.m.
CHAIR: Hello everybody and welcome to the MAT Working Group, the Measurement, Analysis and Tools Working Group, at RIPE 66. We will commence.
So, we have got an agenda today which has several good talks about current measurement activities, both at the RIPE NCC and beyond. We have got some introductions to some work that is being done at the IETF on network mapping and broadband measurement, and Vesna and Massimo are talking about some tools that are being integrated into RIPE Stat and Atlas in the future.
So, this is the welcome. I would like to thank the Jabber scribe and stenographer for their help. A reminder to everyone: when you are talking at the microphone, please state your name and affiliation for the webcast.
The minutes from the last meeting were posted, I think in mid-December, and I don't think there are any comments on the mailing list about them. Any objections to approving those minutes?
Okay. I'm seeing none, we'll consider those approved. And any revisions to the agenda to propose?
All right. I guess then we'll proceed. I think our first speaker is our own co-chair, Christian Kaufmann, to talk about some new work he is getting started.
CHRISTIAN KAUFMANN: Whatever hat you normally associate with me, none of them is correct for this presentation.
So, apparently I have way too much time. So, I decided to do a little bit of a master's thesis, as one does, so I signed up for a Master of Science in advanced networking at the Open University in the UK, and one of the things you have to do is find a topic. As I'm just not interested in cool aspects of mobile network security and in which order you want to tweak them, and as I have no clue about them, I thought: let's take something out of my life, which is peering and interconnection related, and which is hopefully actually relevant to a couple of people, so that I'm not just writing a thesis nobody cares about which then gets saved on a hard disc and backed up every month.
I had a couple of ideas myself. Then I had a conversation with Martin Levy (is he in the room?) which kind of then inspired me to the more or less correct version of this, which is an analysis of the Internet interconnection density in IPv6 compared to IPv4.
So what does it actually mean? Well, there are a lot of things which we blame IPv6 for; one of them is that it is slower, different, it doesn't work, all that kind of stuff. Part of the latency and speed is certainly related to MTU issues, tunnels, hardware fallbacks, that kind of stuff. At least from my perspective, there is also a lack of peering density in IPv6. That's at least what I come across in my daily work, working for Akamai. People really started to have IPv6, and the backbones and all that kind of stuff are fine, and then they start to set up one or two sessions; then, when it runs fine, you can ping the other side and you have an IPv6 traceroute, they are cool and people kind of stop. At least some of them. Which leads to the point that you have connectivity, but it is actually not at the same density, so instead of having 10 or 20 interconnections over multiple links with backup and redundancy, some of them set it up over a couple of links. So I guess so far that wasn't a big problem. Or it might not be a big problem for the next year, because there is not so much traffic and we are actually happy enough if we get a connection at all.
But, in my opinion, it leads to the point that we will not use it or adopt it as fast as possible, because it is actually a little bit slower. I also know that from an Akamai perspective, where we try to get as many IPv6 sessions as we have in IPv4: the first part is easy, and then you have to chase every single network for all the other sessions, which takes a lot of time.
But that brought me to the point: is that actually a problem? Or is that all fine, you know, we have enough connectivity, there is not much traffic anyway, so why bother? Or is it something where speed makes a difference and we should rather fix it now than later?
So I thought that's a good topic. Let's take that. Apparently the university had the same idea, well, if they actually understood what I do at all, and then they approved it.
Now comes the interesting point: how to actually measure and research that. Well, normally when we do network latency measurements, we do that with pings and traceroutes, at least until Monday, when Randy told us that's not so good. So the point is: how do I actually make a lot of pings, traceroutes and measurements to common destinations, on IPv4 on one side and IPv6 at the same time, and see if the paths are parallel and if the latency and speed are the same? Well, guess what? RIPE Atlas. There is the opportunity, or the feature, that you can actually see which probes are IPv6 and IPv4 and then make measurements just from those probes which are basically dual-stacked, and then, I believe, that's at least my theory so far, if you make these measurements you can see the difference, parse them into big log files and then see how that goes.
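The paired v4/v6 approach Christian describes could be sketched as one request to the RIPE Atlas REST API: one ping definition per address family against the same target, restricted to probes tagged as dual-stacked. The exact field and tag names below are assumptions based on the Atlas API of the time; check the current documentation before relying on them.

```python
import json

def paired_ping_definitions(target, description):
    """Build one IPv4 and one IPv6 ping definition against the same
    target, so both measurements can run from the same dual-stacked probes."""
    return [
        {"type": "ping", "af": af, "target": target,
         "description": "%s (IPv%d)" % (description, af)}
        for af in (4, 6)
    ]

payload = {
    "definitions": paired_ping_definitions(
        "www.example.com", "v4/v6 latency comparison"),
    # Probe selection: ask for probes tagged as working on both address
    # families. The tag names here are an assumption; check the Atlas docs.
    "probes": [{"requested": 50, "type": "area", "value": "WW",
                "tags_include": "system-ipv4-works,system-ipv6-works"}],
}

print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the measurement-creation endpoint with an API key; the point is simply that both measurements share a target and a probe set, so the results are directly comparable.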
There is another point: if you actually figure out that the speed is different, the question is still why? So, one of my theories: if you look at RIS data and Route Views, you can basically see if all the networks have the same amount of sessions; probably there is a different amount of AS hops in between, or you actually see that instead of a random IX on the left side, the traceroute for IPv6 goes over the right side and kind of trombones, in a worse case over the Atlantic, in a better case at least just within Europe. You can see that via traceroutes, probably have a look in PeeringDB where you see the various IP ranges of the IXes, and hopefully mix that all together, analyse it, and then figure out if the theory that IPv6 has less connectivity density is actually true.
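As a toy sketch of that analysis step, comparing AS-path lengths for the same probe/destination pair over v4 and v6 might look like the following. The AS paths here are made up for illustration, not real RIS or Route Views data.

```python
def compare_as_paths(pairs):
    """Given (ipv4_path, ipv6_path) AS-path pairs for the same
    probe/destination, summarise how much longer the IPv6 paths are."""
    diffs = [len(v6) - len(v4) for v4, v6 in pairs]
    return {
        "measured": len(diffs),
        "ipv6_longer": sum(1 for d in diffs if d > 0),
        "mean_extra_hops": sum(diffs) / float(len(diffs)),
    }

# Toy input: hypothetical AS paths, not real routing data.
pairs = [
    ([3333, 1200, 20940], [3333, 1200, 6939, 20940]),  # v6 detours via an extra AS
    ([3320, 20940], [3320, 20940]),                    # identical path length
]
print(compare_as_paths(pairs))
```

A real version would also need to align the paths by timestamp and discard incomplete traceroutes, but the hop-count delta is the core signal for the "less density means longer paths" theory.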
Hopefully we can, well, I can then quantify the difference, and probably even ask the networks why they stopped somewhere in between, and see if my theory is actually true.
With a little bit of luck I can then also write a thesis, which would at least help with the university part. And now my question to you: I can report back at the next MAT session or, well, it actually runs a year, so the sessions after that.
Now, the question is: is that actually something which is A) interesting, do you actually believe that this is an issue, or, you know, have you always set up the same amount of IPv6 sessions, they have the same path anyway, it's the same speed, you don't care anyway? Or is it something that you would be interested in, to see what the difference is and how it looks?
AUDIENCE SPEAKER: Yes, yes, and yes.
CHRISTIAN KAUFMANN: Thanks, thanks, thanks.
AUDIENCE SPEAKER: I have said it to you privately already; for the sake of the audience: we have seen this in our upstream networks, that they do not have IPv6 on all their routers. Some of the routers have no IPv6 while the others have good quality and good peering, so some of the peering points are actually reachable only over v4, which makes packets move over different paths, and that yields latency differences. This has not been a real problem yet, but I found it interesting enough to investigate myself, so I think this is useful.
AUDIENCE SPEAKER: I would like to add to what Gen said: yes, yes, yes, please, and it would be interesting to see the progression of this over time, if possible, to see if there is any improvement going forward.
CHRISTIAN KAUFMANN: Okay. Sounds good. So the main part of the analysis will be done during the summer, so by the next MAT session I will at least have enough data. I might not know what it means by then, but I will have enough data to give a quick update.
AUDIENCE SPEAKER: Are you familiar with the work from Emile Aben from the RIPE NCC about IPv6 density? He actually wrote an article for RIPE Labs about the latency of v4 against IPv6, and he found that the IPv6 network was less mature, with a smaller number of connections, so he might have some useful information for you on that matter. Also, you should check the work from Matthew Luckie from CAIDA; he was working on similar stuff at CAIDA, looking for density issues in paths and things like that.
CHRISTIAN KAUFMANN: The second one I haven't seen. The first one I have seen, well, at least kind of skimmed through, and it is on my reference list. So, yeah, I will have a look again. Thanks.
AUDIENCE SPEAKER: Jan Zorz, speaking with my Go6 Chair hat on. We are running the Go6 lab in my country and I was trying to do a similar thing to what you are trying to do, so if you need data or BGP analysis or anything, we can work together; I can provide the data from my part of the world.
CHRISTIAN KAUFMANN: Sounds good, I will come back to you, thanks.
CHAIR: Christian, I hope that was some helpful feedback. It sounds like there is some support here, so let's thank Christian for his talk.
(Applause)
So I think next on the agenda is Trevor Burbridge. Trevor is from the group that's working with the IETF in an effort called LMAP, for large scale measurement of, I think, access network performance. The idea is to get a big scale look at how access networks in particular are performing. So, I'll pass it over to Trevor at that point.
TREVOR BURBRIDGE: Good afternoon. As it says, my name is Trevor Burbridge from BT Research. I am going to go a bit wider than talking about the IETF; I'll also mention a few of the activities at the Broadband Forum and acknowledge that some of that work is ongoing as part of a European project.
Now, the European small print down there says I can't speak for the EU. Neither can I really speak officially for the IETF or the Broadband Forum, partly because I have no formal capacity there, and even more so because the work has only just started; for example, in the IETF we don't even have a charter yet. So, everything I'm going to say here is where I think things are going, some of the early discussions and some of what might or might not be early decisions, but everything is yet to change and yet to be decided.
So, just a quick overview of the three areas. In the IETF, there are two groups of particular interest. The one that has been ongoing for a while now is IPPM; their remit is to look at the standardisation of measurements. What was felt to be missing from that group was really looking at the overall framework: so we have a set of measurements, how do we control them, how do we configure them, schedule them, how do we get them to report back in a way that could be meaningful to multiple management systems?
So that's really where LMAP is coming from. We had the first birds-of-a-feather session at the last IETF, and hopefully in the next couple of days we'll have a charter ready for Berlin, the next IETF session there.
The Broadband Forum is, I think, working on a very similar activity to LMAP within a group called WT-304. This was set up because within the Broadband Forum there were point capabilities to do a few tests, for example a way in the data model to trigger off a speed test and a way to retrieve that speed test result, but it was felt that that wasn't open or general enough for the configuration of any and all tests. It didn't provide, for example, the ability to schedule tests rather than just trigger one-off tests, or to get back bulk periodic results from those sorts of measurement agents. So that's why WT-304 is there within the Broadband Forum.
And I just want to mention Leone: BT and other partners are working in the IETF and the Broadband Forum, at least part funded by an EU project called Leone. We are working on standards in part, but we are also working on the measurements themselves, and on visualisation and management tools, so that we can take the data from the measurements and do something useful within the network.
So, one of the things we put together for the IETF was a draft on use cases, and within that there is a section on an ISP use case; the IETF work particularly is driven in part by the FCC, and there is also the use case for the end user: what would the end user in the home or in a business premises want to see from some testing capability?
I'll speak now for the ISP's case.
So why does an ISP want the measurement framework? What sort of tests do you want to run, for what purpose? Largely it comes into three different areas. Part of it is detecting problems. We have a lot of management capabilities in our network, but they tend to be looking at a particular technology, a particular segment of the network, a particular layer in the technology. What we find it hard to do sometimes is look at the overall end-to-end piece to see: is there a problem that affects an end user? Any problem, at any layer, at any point in the network, that has an impact on a customer experience, service or application, we can't always see it. By testing from the user premises to different points in the network, and indeed up into different services, we hopefully can get much better visibility of any problem in the network at any point.
The other point: it's not just about problems. It's also about planning and operations. You know, can we see that we have got enough capacity in every link and at every point in our network, for example? That's not necessarily what you'd call a fault, but through running the right tests we can diagnose that and see if we have got the right planning rules in place, the right capacity rules in place.
The other thing is that in any live network there is continual iteration, continual rollout; we have new services, we have new equipment. Now, as we deploy our equipment, of course we validate it within the labs. As it then goes into live deployment, can we make sure we have got enough density in the measurement capability that we can really test those new pieces of equipment, those new services, and see they are fit for purpose before we roll them out mainstream to customers?
The last point there is really just covering off everything again. So, you know, as a network operator, indeed as a service operator, if I was to cross out ISP and put service up there, can I really tell what sort of experience the end users are getting?
So, from that sort of use case, from that motivation, what do we want from standards? We want a way to measure from premises to multiple points in the network on a large scale. We're talking millions and millions of lines. That's not to say we'll run every test once an hour on every one of those lines, but some basic tests we might want to run all the time at a higher frequency; some more dedicated tests we might want to deploy on demand, when we think we have a particular problem we need to look at, or indeed just on a single line when a user says they have a problem or where the user just wants to look at the problem themselves.
We want it to be standardised, so we can have these tests pre-built in many different devices. We also want it standardised, and this is where the work is more in IPPM, so we can say: is this test, is this measurement result, comparable to another measurement result? Are we running the same test at the same frequency, on the same schedule, across the same part of the network as well? So there we need to standardise what we mean by a network path as well, and define a network path. And I have mentioned before we want it to be able to be deployed on many different devices; we don't want to build it on a protocol only available on core network routers. We want to cover both scheduled and on-demand tests: we want to say run this test now, but we also want to put out a certain schedule and collect information, so we can go back in and diagnose problems that way as well.
So, within all of those, Leone, the Broadband Forum and LMAP, we are looking at this sort of framework. IPPM has a concept of a measurement agent. Within LMAP we are saying there are two subsets of measurement agents. There are measurement agents that actually trigger off the tests: they talk to a controller, which triggers off those tests, and they are also the subset that reports back the results. Then there is a subset of measurement agents just sitting in the network responding to tests. One reason we did that is that some of these right-hand side measurement agents may not be dedicated test servers; they might be live services, it could be a web server or video server or anything else that wouldn't have built into it the ability for the control and the collection.
So within IPPM it's about the test design and how you register that test, so we're talking about a registry of tests: when I say I'm doing a test, with a short string, we know we're talking about the same test with the same configuration.
Path definition: saying I'm testing from my home gateway to the first point in the network where there is no capacity separation between one broadband user line and another broadband user line; that might be another thing we're defining in the path definition.
Within LMAP we are really looking at these two interfaces, the control protocol and the reporting protocol: what information do we need to transmit across those two interfaces, and how do we construct them on existing protocols?
Everything else down the bottom here, I think we are saying, is going to be dependent on the device platform and on the vendor, where the vendor can bring its own know-how and value and so on. Within the Broadband Forum, how you initialise a measurement agent might be done over TR-069, the CWMP protocol, from an ACS, of course if it's a user device, if it's a set-top box; but other network routers, for example, might use NETCONF or something else to perform that role.
So, what else do we have? Well, within the IETF we expect a certain number of work items. We have already written a few use cases; the measurements, rather than the tests, are ongoing in IPPM. We have started looking at how we might do a registry, a first draft of path definitions, and there is a first framework draft and a terminology draft. The rest of that is to come as well. But all of these things, even the ones we have started on, as I said it's pre-charter, so they are yet to change anyway. The things we're going to work on now are the information model, what information we are transmitting; how we get that information into a data model, a JSON encoding; how we take that data model across the reporting and control protocols, which interface to use there; and, specifically for various platforms such as home gateways, how we might do the initialisation as well.
So, what we're trying to do is tread a line between something that's capable of doing everything and something that's simple enough that we can get out and meet the critical demands we have at the moment. We are trying to say: okay, how do we scope this problem down? These are some of the things we are trying to push to scope this down a bit.
What we're saying is that there is a single organisation who is responsible for the measurement system; the measurement system means a controller and a set of measurement agents who take instructions from that controller. And there are two primary motivations for that. One is that there is a single point of responsibility in terms of data protection: collecting the data, not breaking privacy and such. But they are also responsible for the user experience. For example, if I'm controlling a measurement system across a network operator's end users, I want to be responsible for the end user experience of those end users. I don't want other people to be able to run tests, for example, without any permission on my home gateways, and potentially destroy my users' network experience, aside from destroying their security and privacy as well.
What we want to say is: let's keep it simple. Let's have no explicit communication between different controllers of measurement systems. Let's deploy these things, at least at first, in isolated domains. Yes, they might share end test points; they might be testing, as I say, websites; they might be testing against the same measurement agents. But the controllers don't need to talk to each other, and the measurement agents that are under the direction of the controller don't necessarily need to talk to each other.
What we are trying to say is that a single measurement agent only takes directions from one controller. We don't want it to say: okay, I can take tests from you but I can't take this one, because I have got a load of tests from somebody else, he has asked me for this at 10:51, so I can't do this at 10:52. We didn't want to get into that. We want to keep the measurement agent dumb, simple, easily deployable at low cost on a lot of simple devices. That's not to say that when the measurement agent reports its results it can't report them to multiple parties and collectors. It might report subsets of tests to different parties. It might replicate results to multiple parties as well. Those could be collectors under the same domain, the same organisation, or multiple organisations.
Just to say at the end, because we want to deploy on simple devices, and on many devices, we'd favour a simple control and reporting protocol, such as just a simple HTTP protocol, which has its own implications: we don't want to do a lot of things like negotiation of rights and stuff as well.
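To make that concrete, here is a minimal sketch of what such a plain-HTTP result report might carry. Every field name is invented for illustration; as Trevor says, LMAP had not standardised any schema at this point.

```python
import json

def build_report(agent_id, results):
    """Assemble a minimal result report that a measurement agent could
    POST over plain HTTP to one or more collectors. All field names are
    invented for illustration; LMAP had no standard schema at this point."""
    return json.dumps({
        "agent": agent_id,
        "report": [
            {"test": r["test"],      # registry name of the test that ran
             "start": r["start"],    # when the scheduled run started
             "result": r["result"]}  # test-specific result body
            for r in results
        ],
    })

body = build_report("ma-0042", [
    {"test": "udp-latency",
     "start": "2013-05-16T16:00:00Z",
     "result": {"rtt_ms": 23.4}},
])
# A real agent would now POST `body` to each configured collector URL,
# possibly sending different subsets of tests to different parties.
print(body)
```

The referenced test name would point into the test registry discussed earlier, so two parties receiving the same report know exactly which test, with which configuration, produced the result.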
So, as I said, a lot of this is not decided, even the work we have done initially. There are still a lot of open technical issues; these are some of the more interesting ones. One is: when a measurement agent is doing its measurements and reporting back results, should it have the ability to report back what we might call subscriber or line information with that? To say: I did this test, I did this latency test, here are the results, and by the way, this was on a line with this speed, these other characteristics, this sort of traffic shaping policy and so on. Of course, after we collect the results, we can splice in, we can enhance all the data with that information. But should we take a subset of that information and give it to the measurement agent so that it can report it back? And one of the use cases for that is if it's reporting back to another party who wouldn't normally have access to that information.
If we do want to do that, should we do it in a standardised protocol, or just say that's device specific? Using, for example, the Broadband Forum framework, the home gateway will already know things like sync speed, or we can give it certain information. If it has that information, you can write a test, and that test can gather those characteristics from the environment and report them back in its result. That would be test-by-test specific, whatever it was able to take; or do we standardise it?
Admission control. It's not hard to see that what we are building here is not too dissimilar to a botnet and could be doing denial of service attacks. We want to protect the network itself, and we want to protect those end measurement points. How do we do that? Well, some tests, such as those in IPPM, can send a 'sorry, I can't serve you' notification. If we are testing against a web service or live service, you don't necessarily get that fine control; you can block it, maybe based on an IP address, but you don't necessarily have the capability.
The other thing is we need the ability to quench these tests so we don't overload the network. How do we quench it? Do we quench it from the controller, in which case we have to wait for a device to call back in, or do we quench it by pushing back, for example, from the network measurement points? Those are open discussions.
The other discussion is around capabilities. What we don't want is the complexity of saying: okay, can I run this test on this device, how much CPU has it got left, what applications is the user currently running on it? In the main, these are going to be devices like home gateways, where we know what's running on them and we know we can run the test at this frequency; we have proved that in the lab. But what we might want to do is just go to them and say: what set of tests have you got configured and stored?
The other part of that problem is: should a measurement agent be able to ascertain from a remote measurement agent what tests it can perform? So, if I tried to run a one-way latency test, should that remote point be able to say 'I can't run that test' rather than just saying 'failed' or 'denied' or some other sort of error message? What sort of complexity do we need in those error messages?
Just to sum up: large scale measurement is a hot topic. There are other standards areas where work is going on, and there are numerous academic and research projects going on.
We do feel it's of great benefit, potentially, in the future, not just to ISPs but also to other network providers, other service providers and the end users themselves, and of course one of the reasons I am here is to invite participation in those forums. We have only just started, so please come and join us, come and help write the drafts, come and have discussions, and let's take this somewhere useful.
CHAIR: Thank you Trevor.
AUDIENCE SPEAKER: Daniel Karrenberg, RIPE NCC, somehow involved with RIPE Atlas.
So, one thing I didn't get from your presentation, and maybe that's because I'm already tired: is this project that you're describing just about making standards, or is it also about making implementations?
TREVOR BURBRIDGE: The EU Leone project is about implementations. One of the partners there is SamKnows, and we are adapting what we have from SamKnows to add additional tests and move some of those interfaces to what might be a standard interface. It's very much building things. Within the IETF at the moment we are talking frameworks and ideas and capabilities and requirements; at some point, hopefully, there will be vendors and we'll build something.
DANIEL KARRENBERG: I think the scoping you described is quite sensible.
AUDIENCE SPEAKER: Hi Trevor, it's Andy Davidson. I just think this is a great project, because I know from experience that when you are trying to give a consistent level of support across a large number of access circuits, getting standardised data that is meaningful and looks the same all the time is very difficult to do right now. So, this is a great project. That being said, to make the data actionable when it comes back, it can be really difficult, when you are looking at some diagnosis, to work out whether the issue you are having is on some kind of IP or retail or user layer, or whether it's happening at a transport layer, the DSL layer underneath. And is it in the scope of this project to allow you to do analysis of the transport too?
TREVOR BURBRIDGE: I think 'in part' is the answer. This sort of measurement framework would allow you to run tests over different parts of the network: we could run them to different test points, up to different services. Any data we get back, we can enhance with the topology, the assets that line actually uses. So between those two things we should be able to say: well, the problem is on this part, this segment of the network. The other part is then to say what the problem is, and there I think we fall back in part on existing diagnostic and test systems. Particularly where we are looking at a lower layer, we have point diagnostic systems for those layers. This test says: have you got a problem that is affecting end users? Where is it? And then you start your hunt.
AUDIENCE SPEAKER: Just being able to confirm that an end user's claim is matched with the realities would be useful. But maybe v2, a v2 refresh of the project (it's really cheeky of me talking about v2 while v1 is being written), but if there was ever some way of identifying with certainty that's a rubbish bit of copper, so that there is some evidence that... you understand this?
TREVOR BURBRIDGE: It's also dependent on the device. Where we are doing tests from a home gateway, if that is also the modem, then I suppose that along with reporting back these IP layer test results we can splice in and also report back DSL diagnostics, FTTP diagnostics, whatever.
CHAIR: Any further questions? Thank you, Trevor.
This work in the IETF is really pretty early on. It's at its first BoF and it will probably start to turn into a Working Group soon. Input from this community, of the type that Andy and Daniel were providing, would be really valuable. If you Google for IETF LMAP, that will direct you to a mailing list where discussions are going on.
I think next up is Vesna, to give us some updates on RIPE Stat, to lead up to Massimo's presentation on BGPlay.
VESNA MANOJLOVIC: Hi, the RIPE Atlas presentation is coming later; I want to talk about RIPE Stat first. I want to keep this presentation about RIPE Stat actually quite short, because we already had a public demo about RIPE Stat in the BoF slot on Monday. So, I will just give you the highlights in this update: what is new since the previous RIPE Meeting.
A short introduction. Okay, so first let me see a show of hands: how many of you have ever used RIPE Stat? This is a beautiful picture. With RIPE Stat, we are attempting to make an interface to this wealth of data that we collect as the RIPE NCC and make it searchable in all kinds of ways for all kinds of audiences. So it's a one-stop shop, and you can use it either for network troubleshooting or data analysis. The main interface is the web one; this is how it looks: it's a bar where you insert the search term, and that's it. It's as simple as that.
However, some people prefer the command line interface, so we have developed that: there is the text server where you can direct your WHOIS lookups if you prefer. There is also a mobile site, at /m, for your mobile devices, and there is also the Data API, if you want to make your own scripts interact with RIPE Stat.
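A script's call to the Data API can be sketched as follows. The `/data/<endpoint>/data.json?resource=` URL pattern reflects the documented RIPE Stat Data API, but treat the endpoint name as an assumption and check the Data API docs for the full list of endpoints.

```python
def stat_url(endpoint, resource):
    """Build a RIPE Stat Data API URL; calls follow the pattern
    /data/<endpoint>/data.json?resource=<resource>."""
    return ("https://stat.ripe.net/data/%s/data.json?resource=%s"
            % (endpoint, resource))

# e.g. the routing status of a prefix, fetchable with any HTTP client:
url = stat_url("routing-status", "193.0.0.0/21")
print(url)
```

The same pattern works for an AS number or, as mentioned above, a country code as the resource, which is what makes it easy to script the country comparisons described next.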
If you use the web interface, in that bar you can type in an IP address, an IP range, an AS number and, since recently, a country code or a host name. We have also organised and structured the data in a more digestible format, so the first thing you get is 'at a glance': only four widgets, in our opinion the most important ones, and the rest is organised in tabs based on category: routing, database, DNS and so on.
One of the highlights: we re-included BGPlay in RIPE Stat, so BGPlay is back. This is something that I, as a trainer, have heard a lot about during training courses; this is a tool that people love, used to love. When we stopped supporting it, they were asking all the time: where is BGPlay? Now it's back, and you will hear from the next speaker about its features.
Then we added a lot of support for country comparisons. We have noticed that people like this feature: they like to compete with their neighbouring countries, or to look up how any other country is doing. The historical view of the routing and of the IPv4 and IPv6 resources per country is a very popular feature, and you can compare them and have them all on one page.
And we want to go even further, to put all this country comparison in the same widget; this is coming up soon. We have done realtime monitoring: this is a view of one of the outages in Syria. Unfortunately this keeps on happening; you can monitor it in real time, you can see when it comes back again, and it produces a lot of newsworthy information.
And another view of the data that the WHOIS database gives you: the abuse contact finder in RIPE Stat is based on the information in the WHOIS database, but it gives you a different view of it, and it also gives you a quality rating, as far as we can guess.
So, this is why we still keep it in beta status: the abuse-c that you saw on all the stickers and T-shirts at this RIPE Meeting is still being implemented. So, until that data is actually stored in the RIPE Database, we are only trying to guess the best abuse contact information when you access RIPE Stat.
And surprisingly for us, this is one of the most interesting features for End Users: they come to the RIPE NCC, they find RIPE Stat and they go, oh my God, somebody from this IP address is attacking me, who can I contact? And if they don't find the information, they complain to us, so we have to explain a lot. We made a lot of articles and videos trying to explain to these people how the Internet works and where they can find abuse information.
Another tab in RIPE Stat shows you the activity of the RIPE Atlas measurements and where the probes are located. So if you look up a country, it will show the probes in that country, or in any prefix or AS number.
We also do a reverse DNS consistency check, comparing your reverse delegation registration data with the actual DNS checks. The DNS checks are cached, so if you want the most up-to-date information, you can perform an interactive check yourself.
And another pretty picture of the DNS chain: depending on what you insert there, you get the forward lookup, and then we do the whole cycle, so a reverse lookup and a forward lookup again, and you get this chain.
It's interactive, you can move these points around, but I don't have time and the live demos are not a nice thing to put you through, so try it out.
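The forward/reverse cycle just described can be sketched as a small consistency check. In this hypothetical sketch the resolver functions are injected so the logic runs without the network; a real run could pass `socket.gethostbyname` and a reverse wrapper instead:

```python
import socket

def dns_chain(name, forward=None, reverse=None):
    """Follow the chain RIPE Stat draws: forward lookup of a host name,
    reverse lookup of the resulting address, then a forward lookup of
    the name that came back. Returns the chain and whether it is
    consistent, i.e. whether the last hop lands on the same address."""
    forward = forward or socket.gethostbyname
    reverse = reverse or (lambda ip: socket.gethostbyaddr(ip)[0])

    addr = forward(name)      # name -> address
    rname = reverse(addr)     # address -> reverse name
    addr2 = forward(rname)    # reverse name -> address again
    return [name, addr, rname, addr2], addr == addr2

# Offline example with toy resolvers (hypothetical data, not real DNS):
fwd = {"www.example.net": "192.0.2.1", "host.example.net": "192.0.2.1"}.get
rev = {"192.0.2.1": "host.example.net"}.get
chain, consistent = dns_chain("www.example.net", forward=fwd, reverse=rev)
print(chain, consistent)
```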
And if you are curious about future development, we have published a road map, together with the other services of the RIPE NCC; the RIPE Stat road map is out there. In the delivered category are most of the things I already mentioned; the in-progress column we still have to update, as some of it is already delivered; and then there are planned and requested features. Let us know what your priorities are among the requested features and which ones we should be working on first.
This is the tool that we want to develop in close cooperation with the community. So, please, talk to us.
And my favourite; since I have a stage now, I want to repeat this. You can do visual browsing through the WHOIS database information by putting these Lego-like objects on your screen. We try to demystify the connections between the various and very many RIPE Database objects, because this is again something from the training courses: we realised people cannot imagine this, they can't visualise it, so we did it for them. If you query for an aut-num object, it shows you all the related objects, and then you can click on each one of them; it centres the query on that one and shows you all its relevant objects, so you can actually go through the whole database like this.
And a similar, but slightly different, tool shows you the hierarchy between different address space objects in the database. It shows you the upstream provider and the customers as they are registered in the RIPE Database, and again you can browse, go up the hierarchy, to the children, to the siblings; it's fully interactive and it has a lot of data in there, but in graphical form.
So this is all what I have. If you want to give us feedback, this is how you can find us.
And I'll take some questions...
CHAIR: Thank you Vesna.
AUDIENCE SPEAKER: Hello, just a little note. I didn't see it on the slides, or I missed it, but you also have an app for RIPE Stat.
VESNA MANOJLOVIC: Yes, it was not mentioned, I was trying to keep it short; yes, we have a mobile app. Thank you, it's good that this comes from the audience. I mentioned the mobile site because this is new, and the mobile app we already had before.
AUDIENCE SPEAKER: Tom Smith, Wireless Connect. I really, really like the app. It's very helpful. I find some of the sensors that you have around the world aren't available, so it shows a reachability statistic that's a bit lower than you'd like. It would be nice if, when you can't contact one of the sensors, it just gets removed from the calculation of reachability. Just something I have observed in using it, but thanks very much, it's really helpful.
VESNA MANOJLOVIC: Thank you for the feedback, I'll catch up with you later.
CHAIR: Thanks again Vesna.
(Applause)
And so, keeping in the theme of visualising resource information in a RIPE Stat vein, I'll welcome Massimo Candela to talk about the BGPlay visualisation technology, which is now reintroduced into RIPE Stat, as Vesna said. I think we have some live visualisation as part of this. We have to switch over computers.
MASSIMO CANDELA: Good afternoon. I am the developer of BGPlay JS, and today I want to talk about this tool. We can start with a bit of history. BGPlay was created in 2004 by the computer networks research group at my university; I come from this research group. It was a Java applet for the visualisation of the routing information about a prefix in a given time interval, by means of an animated graph. It was hosted for many years by the RIPE NCC, and it is currently used by the Route Views project of the University of Oregon.
Over the years we created a lot of tools related to network visualisation, like iBGPlay, BGPlay Ireland and historical BGPlay, but this is the past. Now I want to show you the evolution of BGPlay: BGPlay JS. It's a new web application in pure JavaScript, including the key features of BGPlay and introducing new ones, and it's a RIPE Stat widget. It uses the data API provided by the RIPE NCC, in JSON format, so you can also use it for other purposes, whatever you want.
We can talk about technical stuff after the live demo. Okay, this is the main window of BGPlay JS. We can use the yellow box to provide resources, like autonomous systems or prefixes, and what we get is the graph. In the graph there is a red node: the red node is the autonomous system originating the provided prefix. The blue nodes are autonomous systems containing routers peering with route collectors, and we also have black nodes, which are the other autonomous systems involved in the query. The main feature of BGPlay and BGPlay JS is the possibility to animate a BGP event: for example, when a BGP announcement occurs, a new path connecting a set of nodes is animated. We have different animations for different types of event; for example, for a path change, we morph the old route into the new one. Paths not involved in a path change are collapsed together into a dashed path in order to reduce the complexity of the graph.
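The node colouring just described can be reconstructed roughly as follows. This is a toy sketch in Python rather than the actual BGPlay JS code: given AS paths seen by route collectors, it classifies the origin (red), the collector peers (blue) and everything in between (black), and collects the edges of the graph:

```python
def build_graph(paths):
    """Derive BGPlay-style node roles and edges from a set of AS paths.

    Each path runs from a collector peer to the origin, e.g.
    [3333, 1103, 286], where the last AS originates the prefix.
    Mirrors the colouring described in the talk: origin red, collector
    peers blue, every other AS on a path black.
    (Toy reconstruction, not the real BGPlay code.)"""
    origins, peers, others, edges = set(), set(), set(), set()
    for path in paths:
        if not path:
            continue
        origins.add(path[-1])               # originating AS
        peers.add(path[0])                  # AS peering with a collector
        others.update(path[1:-1])           # intermediate ASes
        edges.update(zip(path, path[1:]))   # adjacent AS pairs
    others -= origins | peers
    return {"red": origins, "blue": peers, "black": others, "edges": edges}

# Two example paths towards the same (hypothetical) origin AS286:
g = build_graph([[3333, 1103, 286], [12654, 3356, 286]])
print(g)
```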
In the lower part of the window there are two timelines. The first shows the number of events over time, and the second shows the events ordered by time, coloured according to the type of event. On both there is a red cursor pointing to the current instant, so, clicking on a timeline, we apply all the events between the old position and the new one; it's the same for the second timeline. The information panel in the upper part of the window shows information about the last event, and we can get information about the autonomous systems; we can also focus on a part of the graph, and it's the same for a path. In the panel you can set a time interval and other filters, and we can manage the animation: a path change, another path change. We can also repeat the last event by pressing control or shift, go to the previous event or to the next one, or focus on a sub-interval of the selected period; the selection is also shown on the second timeline, and now the animation will end at the end of the selected period. The second timeline scrolls during the animation to keep the current instant visible. The layout is automatically computed, but if you want to do manual tuning you can: you can drag a node, select a set of nodes just by clicking, or select a cloud of nodes by double clicking and holding, and you can save the tuning and restore it from this panel. You can also get an SVG of the graph, and there is an option to speed up the animation. Now I want to show you a more useful use case. This is what happened some days ago to this autonomous system. There are two peaks in the first timeline: before the first peak we have full reachability of the red node, and after the first peak we have a lot of withdrawals and poor reachability.
And after the second peak, the reachability is back. Something similar happened yesterday morning: again, after the first peak we have poor reachability, and after the second one the reachability is good again.
So, we can come back to the presentation. BGPlay JS: we created many tools for network visualisation, and what we had in mind during the creation of BGPlay JS was a lasting framework which simplifies the creation of new tools for the representation of evolving data; not only networking data, but anything that can be described in terms of graph components.
It has no specific external dependencies, so no Java virtual machine, and it's usable also on mobile devices. The last two points are really important: it is not tied to a specific data set, so you can use it on whatever you want, and there is no server-side computation, so you don't need a cluster of servers or a service on a server, just an API.
A bit about the framework: it's completely client side, in pure JavaScript. It's based on a stable core composed of nodes, paths and properties, and the functionalities and representations are provided by a set of modules, each of which is completely isolated; they communicate by means of events and an event aggregator, so only the modules that are needed are loaded, and each module provides a specific function. BGPlay JS is a particular instance of this framework.
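The module isolation via an event aggregator can be illustrated with a minimal publish/subscribe hub. This is a sketch of the pattern only, not the BGPlay JS implementation, and it is written in Python for consistency with the other examples here:

```python
class EventAggregator:
    """Minimal publish/subscribe hub of the kind the talk describes:
    modules never call each other directly; they only publish events
    and subscribe to the ones they care about."""

    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        """Register a handler for an event name."""
        self._handlers.setdefault(event, []).append(handler)

    def publish(self, event, **payload):
        """Deliver the payload to every handler subscribed to the event."""
        for handler in self._handlers.get(event, []):
            handler(**payload)

# Two isolated "modules": one publishes path changes, one records them
# (a real module would redraw the graph instead).
bus = EventAggregator()
log = []
bus.on("path-change", lambda old, new: log.append((old, new)))
bus.publish("path-change", old=[3333, 286], new=[3333, 1103, 286])
print(log)
```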
In the last days, during the RIPE Meeting, I received some questions, and this is the list. First: yes, BGPlay JS is open source, you can get the code. Yes, you can visualise your own data; you don't need server-side computation; you can use our JSON format, or if you don't want to use it, you can adapt your format with a wrapper on the client side. You can implement all the visualisations and all the features you want; you implement your module as a Backbone.js view.
And of course you can embed BGPlay JS in your web page, like the other RIPE Stat widgets: you can copy and paste the RIPE Stat embedding code, and there is also a set of additional parameters to specify options, such as the initial layout (for example, after a manual tuning), the initial instant, or the possibility to prevent new queries.
That's all. Thank you for your attention.
(Applause)
CHAIR: Questions for Massimo? Such a stunning demo that no one has anything to say.
DANIEL KARRENBERG: I have to say something. I was really stunned when I saw the first version of this, and it actually really works on your phone. It's really, really, really nice. I like it, and I'm quite happy that, as the RIPE NCC, we could help make this happen.
AUDIENCE SPEAKER: Donal Cunningham from AirSpeed Telecom. On behalf of network engineers everywhere, because I speak for all network engineers, thank you very much for BGPlay because the ability to answer the customer question, what the hell just happened, is a very important one in operations. So, on behalf of the large constituency of network engineers I obviously represent, thank you very much.
CHAIR: Any other questions? Let's thank Massimo one more time.
Our final scheduled talk for today is again Vesna to talk about RIPE Atlas:
VESNA MANOJLOVIC: This is going to be a longer talk, so I don't want to be stuck behind this other microphone.
So, this is, again, introduction to the next session, which is going to be the BoF of the RIPE Atlas community, and pay attention because I don't want to do this again.
So, it's supposed to be an update, but I will start again with the history, or actually with geography in this case. This is where we have Atlas probes; we like this map, which shows we are present everywhere. And these are the actual numbers: just last week, before the RIPE Meeting, we reached the milestone of 3,000 active probes, and you can see that more than 1,000 of them are actually on IPv6. We have more than 6,000 users, and quite a big number of them are members of the RIPE NCC, from the LIRs.
So, the hosts of the RIPE Atlas probes can do four types of customised measurements: ping, traceroute, DNS lookups and SSL certificate fetches. And this is one of the benefits of taking part in RIPE Atlas. You don't have to be a member of the RIPE NCC; anybody can request one of these probes and we are going to ship it to you, and your contribution will be to actually keep it plugged in, so that we and other members of the RIPE Atlas community can use your probe for measurements.
And the actual personal operational benefit for you is that you get access to all the probes that our volunteers are hosting and you can do your own measurements.
However, the data is available to you even if you do not host the probe. So the data is publicly available to anybody. We are analysing this data, visualising it and this is the core of the talk of the rest of the slides that I'm going to show.
So, what's new? The newest and the biggest feature is actually for the RIPE NCC members. And it is the quick look measurement. We are trying to make it easier for people to use RIPE Atlas. It's very convenient for us to preach to the converted and we have most of our hosts and volunteers who are geeks and who are actually interested in this and they log into the web interface and schedule very complicated measurements and so on. But the rest of our membership is not exactly like that. So we are trying to make it easier and faster for them to get the results.
So now, in the LIR Portal, they can go to a page that brings them to RIPE Atlas and just type in a host name or an IP address; for example, this URL here might be familiar to the people in Ireland. Then a ping from 100 random probes is going to start immediately, and we are going to visualise the results as they come in.
We have described this in a Labs article, and you can see here how it looks on a map. The colour-coded RTTs show how fast you can reach that certain website from each point in Ireland.
And these are other organisations. This is also what you will get for these quick look measurements; this is how the pings are currently visualised. We did this prototype for pings, so this is the first implementation, and the next step is to offer you traceroute and DNS measurements too.
Then we visualised the number of probes per country, also on such maps, and the amount of credits and your usage of credits for performing measurements is also becoming prettier.
These are other member-specific services that we already had. You can access them through the LIR Portal, under /atlas: the quick look, and a traceroute from all IPv6-capable probes towards a destination of your choice. And that is all available even if you do not have a probe, so you don't have to use any credits for these measurements if you are a member of the RIPE NCC. If you want to perform a different kind of measurement, you can get additional credits.
As I mentioned in the RIPE Stat presentation, you can now see, per country, per prefix, per autonomous system, where these probes are geographically on a map. These are the probes in Ireland; it's going to grow significantly after this RIPE Meeting, because we gave out so many of these and there are a lot of Irish people at the meeting, so we are expecting to see huge growth here.
And this is then a listing of a lot of other new features, so I'll go slowly through each one of them because now I have a lot of time.
One?off measurements. This is what we base the quick look on. So, if you want just a very quick measurement that is not being repeated, then you can just schedule it like that, if you are a probe host.
Then the next one: you can do all of this through the API, so you don't have to go through the web interface. People are using it and writing Labs articles about it, and if you have a case study that you have actually implemented using RIPE Atlas, please let us know; we would like to hear from you and help you publish it on RIPE Labs and share it with other people.
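A one-off measurement scheduled through the API boils down to a POST of a JSON body. The sketch below only builds that body; the endpoint path and the field names (`definitions`, `probes`, `is_oneoff`) are assumptions based on how the Atlas REST API was documented around this time, so verify them against the current API reference, and note that a real request needs an API key and costs credits:

```python
import json

# Assumed endpoint path for the REST API of this era; check the docs.
ATLAS_API = "https://atlas.ripe.net/api/v1/measurement/"

def one_off_ping(target, probe_count=10, area="WW", af=4):
    """Build the JSON body for a one-off ping measurement.

    The field names here are assumptions modelled on the documented
    Atlas REST API (a 'definitions' list, a 'probes' selector and an
    'is_oneoff' flag), not verified against the current version."""
    return {
        "definitions": [{"type": "ping", "af": af, "target": target,
                         "description": "one-off ping"}],
        "probes": [{"requested": probe_count, "type": "area", "value": area}],
        "is_oneoff": True,
    }

body = one_off_ping("www.ripe.net", probe_count=100)
payload = json.dumps(body)
# Sending it would be an authenticated POST, roughly:
#   urllib.request.urlopen(urllib.request.Request(
#       ATLAS_API + "?key=YOUR_KEY", data=payload.encode(),
#       headers={"Content-Type": "application/json"}))
print(payload)
```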
Then another feature that has been requested a lot is group access: group management of your probes. We started implementing this by enabling sharing of your probe with the colleagues from your LIR. This was easier for us because we know what that group is; we know who the colleagues from your LIR are. The next step is sharing the credits with the people from your LIR, like a shared spending account, let's say; it has its advantages and disadvantages, and we can discuss that in the BoF later on. The step after that is sharing your probe with an arbitrary group of people, so how do we define a group, and so on? This is what we will be busy with, but for now it's very simple: you click through to your probe's configuration details, you say "share", and it gives you your LIR contacts in a drop-down menu, and you share it with them.
Another thing that was requested a lot was to see the source code, and a few weeks ago we published it. You can download it, you can install it on some other device, you can see what it does. We have already received a lot of comments, positive comments like "thank you, this is what we wanted to see", and also some bug reports.
So, again, if you want to contribute, let us know, and we can work together starting from there.
We have some limits implemented on how many measurements you can start at the same time, how many probes you can use in a measurement, and so on. This was implemented at the beginning to protect the system from being overloaded and to ensure some kind of fairness. But since we are improving our back end, overload is not such a big danger any more, so now we are increasing the spending limits and you can do more and more measurements. And if that is still not enough, if you want to do some very specific research with no limits, you can always approach us, and we will either give you more credits or temporarily lift those limits to help you perform your research.
Visualisations, I already showed them. We are still busy with them so expect some more in the future.
And then we actually redid the whole website too. If you are not logged in, the Atlas website looks very much like the rest of the ripe.net web pages, and if you are logged in, you see a personalised dashboard. This is how it looks now: if you are not logged in, you get this navigation, which gives you a lot of information about RIPE Atlas, how to get involved, results, descriptive text and so on, for the newbies, for the people who come to Atlas for the first time and want to learn more.
But if you do log in, then you see this. So, you see your own measurements, you see your own probes, you see your own credits, and API keys for sharing the measurements with other people and then there are some statistics and some active content there, so links to labs articles and so on, we feature sponsors also on this dashboard.
How many of you like this new look? Nice... and some gymnastics also so that you focus.
We also have the RIPE Atlas anchors. Lots of probes are nice, but sometimes we want more stability and more power, so we introduced the anchor boxes, of which there will be far fewer deployed. The plan for this year is 50; currently we have 11, and another five are in the process of being bought and deployed. The goal for these anchors is to provide a regional baseline: not only to measure globally important targets like the root nameservers, but also to show what the connectivity in a specific region is like, a bit more locally. And a cryptic statement: the future history. Well, we are deploying this so that we can collect information now and keep on collecting it, so that in five years we can look back and say, okay, this is how the local connectivity changed because a new Internet exchange was introduced in that country or region, or some other event happened; so that we can actually show the history in the future. This is what I tried to put on the slide.
So, if you are interested in hosting one of these anchors, or in what our future plans are, please come to the BoF after this session.
And here they are: new probes. This is the new generation, version 3. We have distributed 200 already during this meeting and we got a new shipment, so if you want more, there will be some more tomorrow morning, I guess.
What does it do? Well, the first part I won't read, it's a lot of technical details; you can read it for yourself. What it doesn't do is also quite important, because some people know that the TP-Link, blah blah blah, whichever model number it is, is actually a wireless router. We have changed that. It's not a wireless router any more; it's a RIPE Atlas probe. It doesn't do wireless routing. Don't try to use it as a wireless router. This is something that people keep on saying: oh yeah, what if I switch this little thing here, is it going to, like, do something? It won't do anything; it's a RIPE Atlas probe. It does only what the previous RIPE Atlas probe was doing, which is measurements.
So if you used RIPE Atlas before and you have an old probe, don't unplug it. Don't try to ship it back to us. Don't ask for a new one just because there is a new one. You can only get a new one if you promise that you will actually deploy it in another network. This is one of our goals, diverse topological coverage, so you can't have this one if you already have an old one, unless you put it in the office while the old one is at home, or at your parents' place, or with some other friend. So, I guess I have stressed this enough.
And there are some pretty pictures over there of the probe.
So, since the last RIPE Meeting, our colleague Emile has tried to make use of the RIPE Atlas network to visualise how certain events affect the Internet. In his Labs articles, you can find links to these visualisations and the story of what it all means. Here you can see how some of these lines keep on crossing; I don't want to dwell on this, but the traffic actually shifted when the New York networks were affected. So that's the local story. The global story is that it affected the connectivity of other probes in ways you sometimes would not expect; actually, a lot of probes around the world were affected.
So, for us, it was a success story. And for people who were actually affected by this, well not so much.
Okay, so my favourite slide: what is the community doing? We wanted to provide a permanent home for our road maps, so, together with all the other services of the RIPE NCC, you can now find the RIPE Atlas road map on a dedicated page. We also have a community repository for tools on GitHub, and our researchers have contributed some code that you can just use: scripts for analysing the data that you download from RIPE Atlas, so you don't have to write your own. If you like them, great; if you think they should be different, fine, you can change them, they are open source, they are just scripts. So please contribute if you have some code, so that other people can benefit from your programming too.
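Those analysis scripts typically start from the raw JSON you download. As a small illustration, here is a sketch that computes the per-probe minimum RTT from ping results; the result shape assumed here (a `prb_id` field and a `result` list whose entries carry an `rtt` key for replies) is common, but the format has varied between firmware versions, so check the data you actually download:

```python
def min_rtts(results):
    """Per-probe minimum RTT from downloaded Atlas ping results.

    Assumes each entry has a probe ID under "prb_id" and a "result"
    list whose items carry an "rtt" key for replies; timed-out pings
    have no "rtt" key. Probes with no replies at all are omitted.
    (Assumed format; verify against your downloaded data.)"""
    out = {}
    for entry in results:
        rtts = [r["rtt"] for r in entry.get("result", []) if "rtt" in r]
        if rtts:
            out[entry["prb_id"]] = min(rtts)
    return out

# Toy sample mimicking the assumed format: probe 1 answers twice,
# probe 2 times out on all three pings.
sample = [
    {"prb_id": 1, "result": [{"rtt": 25.1}, {"rtt": 24.3}, {"x": "*"}]},
    {"prb_id": 2, "result": [{"x": "*"}, {"x": "*"}, {"x": "*"}]},
]
print(min_rtts(sample))  # probe 2 is dropped: no replies
```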
Then, on RIPE Atlas itself, there are community pages which show, every month, who the top ten users were: the biggest spenders, and the people who had their probe up most of the time. There are also photos, and you can submit your own, so if you bring this probe on a hiking trip or somewhere in the mountains and take a photo, let us know.
We have the sponsorship programme and we have the ambassadors programme for people who want to help us distribute these probes. Again, come to the BoF to hear the details.
And we talk to other interested parties. And the question mark there: yeah, we also have T-shirts, which we will give out at the BoF.
This is our road map; it's actually a snapshot of the web page. There is the URL; please study it and let us know what you think. What did we forget in this requested column? Which is the most important one that we should focus on next? The planned column is actually what we are going to work on in the next few months; after that, we are going to pick up some of these requested features and work on those. So do let us know which are the most important ones.
Great thanks to our sponsors, they are going to distribute a lot of probes to their customers, and their members, and we just put their logo on the page and now because this new probe is larger, if you're a sponsor, you have a lot of real estate to put your own stickers on this when you are distributing probes to your clients. So think about it.
And a few weeks ago we had a regional meeting in south-eastern Europe, in Macedonia, and you can see their beautiful flag, which is this red and yellow flower, or stylised sun. You can see a lot of them appearing as newcomers to the community. This is also on the community page: once you plug in the probe, your name is going to appear there with the flag of the country that we think you are in. We all know geolocation is not perfect, but we try to work out which country you are coming from, and then we get these nice screenshots for the next time.
At the next conference we are going to show a lot of Irish flags, I guess.
And these are the countries that we cover, or at least the top 20 countries. You can get the data on our website, and if you can help us to place probes in the autonomous systems and countries that we don't have covered, see me later or come to the BoF; we'll talk about that too.
And this is the end of my talk today, well in time. These are our contact details: how you get a probe, how you apply. If you want to take part in the mailing list, there is the address. We publish a lot of updates on RIPE Labs, we are active on Twitter, and if you want to open a ticket it's atlas [at] ripe [dot] net. And you can find us easily.
So, questions?
CHAIR: Thank you, Vesna.
AUDIENCE SPEAKER: David Freedman from Claranet. I'd like to commend the publishing of the Atlas source code; what I perhaps wasn't expecting was a 7 MB tarball to have to download. Is it possible that one of the many GitHub accounts that the RIPE NCC has could publish the source on GitHub?
VESNA MANOJLOVIC: Yes.
AUDIENCE SPEAKER: Andrei. Thank you, a very exciting presentation and project. One thing I'd like to see on the road map is the ability for Atlas to detect networks that allow spoofing. We had a panel on Tuesday, and one of my takeaways was that there is a real lack of data: we don't know which networks allow spoofing, and it's very difficult to track. Without this data, I think a lot of the discussion around anti-spoofing and how to apply peer pressure is a lot of hot air. I understand it's not as easy as just sending a request to you and having you put it on the road map. There is some discussion on the Atlas and MAT mailing lists; Daniel wrote a good e-mail outlining some of the possible issues, but I would like to discuss that. I understand it's not as easy as a ping, but I think it would be as useful.
VESNA MANOJLOVIC: Thank you. And, well, if the Chairs allow, I think we have a few minutes for discussion, or we can bring it to the BoF, whatever is preferred.
DANIEL KARRENBERG: Just to check. Who has read the discussion on the mailing list? At least a few.
Okay. Let me repeat it just for the benefit of those who haven't. Let me first say that I'm really interested to get this data, like which networks are spoofable. I'm always personally in favour of doing some new measurements and getting some new insights, and I have personally been one of the driving forces behind the first anti-spoofing task force that we had at RIPE, so it's a matter that's dear to my heart.
Having said that, RIPE Atlas is also something that's dear to my heart, and I think there are significant risks involved in actually making those probes, which more than 3,000 people around the world host, and we want to gather a couple of thousand more this year, do stuff that shouldn't be done, which is spoofing addresses. And that's basically the only way you can find out whether a network allows this or not.
So I think there is a significant risk, if we were to decide to do this, that we might lose the confidence and the trust of those probe hosts, if they should discover that we're doing this, or even if we just ask, can we do this stuff? So what I would urge is that we not blindly go into "oh yeah, this is interesting, let's just do it". I would like us to carefully consider the risks involved, because the worst thing that could happen is that at some point there is a movement among the hosts towards losing trust, and you all know that losing trust is much easier and much more rapid than building trust. So, to me, there is a choice here between doing this and maybe losing the whole thing that we built, the whole RIPE Atlas. I want us to be conscious about this.
DAVID FREEDMAN: I agree with Daniel. I would also really, really like this data. The problem, though, is that spoofing traffic kind of violates my acceptable use policy, and if somebody asked me, as an Atlas probe operator, whether I would give consent for the probe to make a spoofing test to see whether my network was spoofable, I'd say yes, while configuring uRPF on the interface. So, really, we need a much better defined technical solution to this. But I do think that if somebody asked me the question a different way, would I give consent for my Atlas probe to experiment, perhaps by sending malformed or otherwise unacceptable packets for various experiments which might highlight vulnerabilities in my network, and that information was kept confidential, I might be a bit more amenable to that, given that I wouldn't know exactly what I was supposed to be hiding from it, and I might learn a few things in the process when the results were published. That's all I have to say.
AUDIENCE SPEAKER: I agree with both previous speakers. I agree in principle with maybe an experimental version of Atlas that would allow for security research, and perhaps a special allocation of addresses that wouldn't harm anyone if they were spoofed would be an important stipulation on that. But it might also be useful, if possible (I haven't read the full manual, so apologies to all), to have some sort of alerting functionality: if your probe is offline from some core nodes that you control and know are fairly well connected, a person could optionally subscribe and get an e-mail alert saying "plug the damn thing in, please", or maybe put more politely. Thank you.
AUDIENCE SPEAKER: Benno. Just a short comment. Maybe it's also important to consider what the purpose of such a security experiment into spoofing would be. If we conclude that we need some insight into this, maybe we can ask the RIPE NCC to run such an experiment for a month, for example, and report on it. Or form a group within the MAT Working Group, get some consensus about which people are in this group and that it's acceptable, do the experiment only once, and don't give the tool away freely to everyone. Maybe we get consensus more easily if we restrict the number of experiments, or their frequency, or run a single short experiment; maybe we can discuss this.
DANIEL KARRENBERG: I think we should have a discussion like this on the mailing list. And I still think RIPE has the rule that, you know, if it isn't on the mailing list, it hasn't happened. So, you know, if you are interested in this and if you have an opinion about this, then please join that discussion.
There was one tidbit of data that we just got this morning that I forgot to share, and it's part of what Benno has said: we want to know what the goal is of any of these kinds of additional experiments, and we need to see whether Atlas is actually suitable to do this, and whether there are maybe other means of doing the same thing that are less risky. This was just giving some seed to the discussion, and one of the tidbits that we had: I asked the guys to actually look in the database and see how many of the Atlas probes that are currently deployed are actually behind NATs. My intuition is that if you are behind a NAT, you are not likely to be able to spoof, and it turns out that it's more than half of them. So this is one data point saying that, you know, maybe Atlas is not the right thing to do; maybe we should do something like the spoofer project that actually works on hosts and things like that. I don't want to have this whole discussion here, but it's a wider discussion that we should definitely have.
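[Editor's note: Daniel's intuition, that a probe behind a NAT is unlikely to be able to emit spoofed packets that survive translation, can be approximated by checking whether a probe's local address sits in RFC 1918 private space or RFC 6598 shared CGN space. The sketch below is purely illustrative; the function name is invented and this is not how RIPE Atlas actually classifies probes.]

```python
import ipaddress

# Shared address space for carrier-grade NAT (RFC 6598); not covered
# by is_private in older Python versions, so we check it explicitly.
CGN_SPACE = ipaddress.ip_network("100.64.0.0/10")

def likely_behind_nat(local_addr: str) -> bool:
    """Heuristic: a probe whose local interface address is private
    (RFC 1918) or in shared CGN space is almost certainly behind a
    NAT, and its spoofed source addresses would be rewritten."""
    ip = ipaddress.ip_address(local_addr)
    return ip.is_private or ip in CGN_SPACE

print(likely_behind_nat("192.168.1.20"))  # RFC 1918 -> True
print(likely_behind_nat("100.64.0.1"))    # CGN space -> True
print(likely_behind_nat("193.0.6.139"))   # public address -> False
```

As L185 notes, this heuristic is imperfect: some mailing-list experiments reportedly found spoofed packets did traverse certain NATs, which is exactly why the question needs measurement rather than assumption.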
CHAIR: Daniel, one quick follow-up question. You mentioned the discussion on the mailing list. Do you believe the right place for that is the MAT Working Group mailing list?
DANIEL KARRENBERG: Obviously. It's about measurements and we have this discussion in this room, so I think that's the right place. Anybody who is not on that mailing list, it's very easy to subscribe.
CHAIR: Could you maybe send a message to kick that discussion off?
DANIEL KARRENBERG: I actually did. That's what I asked. Actually, I didn't, Alexander did, and I am thankful to him for actually doing this: not only standing up in the room and saying "hey, I like this", but actually writing down what exactly it is that he wants to do. And I responded to it, and there was one more response, so there are already three messages. So, we have started a thread here. Let me say again, I'm not in principle against this, I am just for caution.
AUDIENCE SPEAKER: There was a discussion on the mailing list where they were surprised at how they were able to get through NAT, and it did work. So they were surprised at the sheer number that did successfully go through. I would say, for me personally, as somebody that's about to run a bunch of them: let me opt in or let me opt out, and I don't really see a reason why we need to be kicking off experimental tests. In my opinion, once you opt in and it's approved, we should just be running this thing constantly and gathering that data. I agree we do need to be careful, but at the same point, if people were doing what they were supposed to be doing, we wouldn't have to worry about hurting feelings. I am more interested in cleaning up what should not have been broken 20 years ago, because chances are it's being used to attack my network.
RANDY BUSH: How many people in this room would opt in but do not know if they are blocking or not? Exactly. Everybody who would opt in knows they are blocking, not because they are nefarious but because they thought about it.
DANIEL KARRENBERG: Good point.
CHAIR: With that, I think we are a couple of minutes over time, so we are bucking the trend at this RIPE Meeting of finishing early. I'd like to thank Vesna one more time. And with that we are adjourned, and I believe the Atlas BoF is here in 30 minutes. So I'll hopefully see a bunch of you back to see what to do next with Atlas. Thanks a lot.