
These are unedited transcripts and may contain errors.


Database Working Group

Wednesday, 15th of May, 2013 at 9 a.m.:

WILFRIED WOEBER: So, good morning, everyone. Welcome to the Database Working Group meeting. My name is Wilfried Woeber; together with Nigel Titley and our third co-chair, we are the three people trying to keep this Working Group on track. I'd like to welcome you, and in particular thank those of you who are in this room and not in the room next door at Address Policy, so I hope you are really interested in the topics and issues that we are going to briefly discuss this morning.

On the screen in front of you is the draft agenda, the proposed agenda, which usually starts with the administrative stuff, the logistics, like the microphone rules: you are welcome at any time to walk up to one of the microphones on the stands in the aisle, and please state your name and, where appropriate, your affiliation, to make it easier for the online scribes and for the person taking the minutes to record who actually contributed.

The next thing on the list is to say thank you to Nigel Titley again for taking notes for this meeting, for taking care of distributing the minutes of the previous meetings, and for managing the action list. And the next thing is to ask for approval of the draft agenda. We are going to have one or two slight modifications to it. One of them is the brief update on the IETF WEIRDS Working Group: I was hoping to have Olaf Kolkman here to give us a two or three minute update on this, but unfortunately he has a collision with another important meeting, so he won't be here. Maybe someone else can put in one or two minutes, one or two sentences, on this. I learned yesterday that there seems to be no major issue with it; the group seems to be on track doing its work and we are going to see some results. That is one proposed modification. The other is that we are going to have a presentation, or a statement of interest and a problem, by Piotr regarding IDN internationalisation; we had this on the table a while ago, and in some way it is also related to the IETF WEIRDS Working Group activities, so we are going to put that in this slot. And to round it out, we expect to have Brian from the Anti-Abuse Working Group give us a little bit of background and an overview of the abuse contact work. Any other additions, deletions or modifications to the proposed agenda? This seems not to be the case, so I think this is the list of things we are going to work on. And with that, I'd like to ask Nigel Titley to walk us through the list of open actions. Thank you.

NIGEL TITLEY: Right. This is a quick action point round-up. We actually managed to clear all open action points at the last Working Group meeting, so all we have to deal with are the four action points that came out of RIPE 65. So, 65.1: RIPE NCC, to liaise with the community to start to develop requirements for a replacement for the org object. This will be dealt with in the RIPE NCC presentation which will follow this one; in fact, all of these actions will be dealt with in the presentation that follows.

65.2: To try and raise more interest in the geolocation attribute, having gathered information from possible users.

65.3: Investigate and report on the data protection issues associated with the database history mechanism. As you know, there is a history mechanism available in the RIPE Database and there are obviously privacy implications in this; the RIPE NCC has investigated these.

And finally, 65.4: To ensure that the abuse contact management system that is now in place is properly documented before it's implemented. All four of these have, I believe, been dealt with and will be addressed in the presentation from the RIPE NCC that follows. And that, I think, is it.

WILFRIED WOEBER: So, thank you very much, Nigel. And with that, I'd like to hand the microphone over to the people from the RIPE NCC, to give the presentations about the database update and what is in the pipeline, and I am handing it over to you to also manage the switch-over.

KAVEH RANJBAR: Thank you. Good morning, I am the manager of the RIPE NCC Database Department. I will give you a quick update on operations, on what we did between RIPE 65 and this meeting, and on the action points. After that, if there are questions, we can take them, and then we will look into the future.

So, as usual, the operational stats are available at the mentioned URL. On average, we have about 22,000 queries per minute, which adds up to about a billion per month. The rate is a little bit higher; it has grown over the past six months, but nothing unusual. Generally in the department we are mainly focused on software development and operations of the database; there are five of us in the department and we are involved with running the software and the system. On downtime: we had no downtime, so uptime was very close to 100 percent for queries and updates.

One of the things that was brought up at the last meeting was reachability, and we said we would measure that through Atlas; the measurements are done. I don't have the combined data for you, but there were no major outages in the reachability stats and things like that.

So that is an update on that. Another thing worth mentioning is that we replaced the software, and with all the releases, I think we did 38 deployments, but with all of that we had no downtime.

Again, these points were also brought up at the last meeting: everything that happens, even the issues that might affect one or two users for a short period of time, is mentioned on the service announcement page, and when resolved the announcements are archived in the history. We also have public release notes for the software, available at the mentioned URL, so all the features and all the changes are documented.

And as usual, any issue with significant impact on our users will be announced on the mailing list.

So, what we did between the past two meetings: we were mainly busy with the redevelopment of the Whois software. All operations are now handled by the new system, the old software is fully decommissioned, and the new system has helped us deploy a fault-tolerant and easy to maintain infrastructure. The new software delivers many benefits and I won't go into details, but mainly it is about change: with the new system, adding new features, fixing bugs, or making changes brought up on the mailing list or by the community is very simple. The main reason for that is that we have a huge amount of tests, about 4,000 test cases at different levels, from unit tests to integration tests to end-to-end tests. Anything we change, we have to run the tests before making a release, and if the change or the new feature affects anything in the system, in almost all cases one of the tests will fail, so we know what is affected and can either fix the code or accept that as a side effect of the change.

And yes, the infrastructure: this was the old system. The main problem with that architecture was that we used a lot of non-standard protocols, for example NRTM for updating the system, and the nodes were not independent. In the new system, every single node is independent; the only shared dependency is the master data store, and everything else is dealt with internally, so the software is one single package. There is one single artefact. In the old system we had a lot of scripts and different bits of code around the system, for split files, dump files, clean-ups and all the other things; the new system is one integrated system. Internally it is modular, but in the end it is one single artefact, and that helps us a lot: if that artefact runs, we know every service is running. It doesn't matter how many nodes we have in the cluster, they automatically find each other, and when a task is scheduled, one of them picks it up and runs it; if it is not successful, it tells another node "you have to do this, I wasn't able to". This helps us a lot because when we decommissioned the old system we recommissioned about 50 jobs, for example, and all of this is taken care of inside the code, so there is no other dependency than just having the code running on one node. That also helped us to automate complex things, because previously we had a system that was spread around with all the scripts and different things. Complex behaviour like the reclaim functionality, which was recently announced and gives the resource holder full power back, used to be a manual process because it was complex. We have now been able to implement that in the software and automate it, and that was only possible because we have this integrated system. We can also deploy new releases to the cluster with no downtime, and that is what usually happens: on average we deploy a new release every three to four days, just fixes and improvements, and the changes are all listed in the release notes.
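As a rough illustration of the scheduling and failover behaviour just described, here is a minimal sketch in Python. The class and function names are invented for illustration only and do not reflect the actual Java implementation inside the Whois server.

```python
# Minimal sketch of the failover scheduling described above: a scheduled task is
# picked up by one node; if that node fails, it hands the task to a peer.
# All names here are illustrative, not the actual RIPE NCC implementation.
import random

class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def run(self, task):
        if not self.healthy:
            raise RuntimeError(f"{self.name} could not complete {task}")
        print(f"{self.name} completed {task}")

def run_scheduled_task(task, cluster):
    """Try the task on one node; on failure, hand it over to the remaining peers."""
    nodes = cluster[:]
    random.shuffle(nodes)          # any node may pick the task up
    for node in nodes:
        try:
            node.run(task)
            return True
        except RuntimeError as err:
            print(f"{err}; handing over to another node")
    return False                   # no node managed to run the task

cluster = [Node("whois1", healthy=False), Node("whois2"), Node("whois3")]
run_scheduled_task("nightly split-file export", cluster)
```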

The code is also now open source, released under a BSD licence, and unlike the old code it is a complete package: it has everything and it is very easy to install. To compile the code, one only needs Java, MySQL, Git and Maven, and I think it might even run on Windows machines. There are 230,000 lines of code, but about 140,000 of those are tests, so we actually have more test code than actual working code. The tests are really helpful and really important: it's not only that we catch any side effects of changes; the tests also document the knowledge of how this system works and what the usual use cases are. We are going to propose providing a drop-in VM, which users can download; it gets the latest version of the software and the latest set of data and sets up NRTM automatically, so users can have a look at their own instance of the RIPE Database. This will be very useful for telcos who build routing tables, because they can get the routing filters from their local instance, which will be much faster.

So, now I will hand over to Denis, who will walk you through the action items. One of them should have been Alex Band's, but Alex is not in the room.

AUDIENCE SPEAKER: I would like to say thank you very much for opening the source code, I think it's really great so I appreciate that.

DENIS WALKER: I am Denis Walker, the business analyst for the RIPE NCC database group. I will take you through the action points and tell you what we actually did, or did not do. The first: at the last meeting we were talking about not replacing the organisation object, but having a new version of an organisation object as a kind of link to something the RIPE NCC has contracts with, so we could see that this was an organisation we knew about. After we had written the new code and looked more closely at how easily we can implement new business rules and features with this software, it turned out we didn't need a new organisation object type at all. All the resource allocations already have a mandatory link to the LIR's organisation object, and with, I think, proposal 2007-01 we started working on all the PI space, making sure that it too will have, when we complete it, links to an organisation object which we know about. So we really already have all this in place. And it's basically the same model as we used for the abuse-c implementation: all the PA assignments funnel up through the top-level allocation object, which has the link to the known organisation object.

So really, all the allocated and assigned resources from the RIPE NCC are linked to an organisation object which we know about and can trust, so there is actually no need for this one. Also, with the tagging facility we have more scope for doing other things and marking objects in different ways, so basically we suggest dropping this action point, or just closing it. And there is also a policy proposal under discussion about links to sponsoring organisations, so I think this is covered in many ways.

Geolocation: we were going to ask Alex Band to say a few words on this, but I guess he must be in Address Policy.

KAVEH RANJBAR: He was the one who was assigned to do this task. As was discussed, it was also brought up that the NCC shouldn't push other things aside just to promote or raise interest in geolocation services, and as far as I know that is how Alex handled it: he had more important things on his agenda, so he didn't do anything specific in that area, except that he has collected the statements of interest he received, like the e-mails and the chat logs, all in one place. But that is what has been done in the past six months.

DENIS WALKER: We have now released a beta service which provides access to the historical data for RIPE Database objects. We had our legal department look at this and decided that generally this is OK, but we actually excluded all personal data. There were two issues with giving the history of personal data: one is the obvious privacy issue, but the other is the fact that until a couple of years ago we used to reuse NIC handles, so if you look back at the history of a person object you may not even be looking at the same person; going back over time this can change several times. So we decided the safest thing at the moment is to completely exclude personal data from the history. And it looks like we already have a question on this.

PETER KOCH: Just a question to clarify: the slide says there are no personal data objects, you said there is no personal data. The difference I am picking on is the handles in the non-person objects, so could you clarify? I apologise for not having looked at the beta test.

DENIS WALKER: We actually exclude personal data objects, so there is no history of person objects or role objects, but the NIC handles in the other objects will be shown.

PETER KOCH: So if I subsequently go and resolve the NIC handles one by one, I might still experience the issue that I retrieve a handle that was correct back then but is no longer correct today, because it was reused?

DENIS WALKER: Yes. If you look at an object from five years ago referencing DW-RIPE, and you look that handle up today, it may actually be a different person from what it was five years ago. So the object may still exist but be a different person.

PETER KOCH: Is there a caveat anywhere that warns about this?

DENIS WALKER: There will be.

KAVEH RANJBAR: We mentioned it in an article, but it's good to mention it on each query that NIC handles might have changed, so we will add that.

DENIS WALKER: We also decided not to show deleted objects so basically we only show the history of current objects within the RIPE database. Now, this was not exactly an arbitrary decision, it was a decision we made which obviously is open to discussion and we will take recommendations on it.

One consequence of this is if you create an object, modify it several times then delete it and recreate it and modify it a couple of times, we will give you the history of the object since it was recreated, not previous to that. And any object that you delete will no longer be available.

Now, I can imagine one obvious use case of wanting this is if you accidentally delete an object, you might want to know what it looked like just before you deleted it to recreate it. So, it was just a decision we made, this is a beta service, so if we get any interest in showing deleted objects, then it's not difficult for us to implement that. But you know, we are open to community advice on this.

The last one concerned the abuse contact management. Oh, a question first.

AUDIENCE SPEAKER: Just a comment or question about showing the deleted objects. Are you sure it will not be against data retention law? I am pretty sure there are retention laws in the European Union and its Member States that forbid keeping the history of certain things. For example, in Poland, after 12 months ISPs are required to remove the information about the correlation between IP address and user, unless certain exceptions apply. So are you sure it's not against retention law?

KAVEH RANJBAR: I have the answer for that. I already checked it with our legal counsel, and the retention laws are there, but they mainly concern private or personal data, and we consider only person and role objects to contain personal data. Since we don't show them, other objects like organisation, route or inetnum are not covered. What I gathered from talking to her is that the law in general is not clear-cut, especially in our case, so there is some kind of grey area, but that is why we really want to see what the community wants. If the community prefers to have that, I am sure we can at least try to see if it fits and get some further legal advice based on that interpretation before enabling the service, or we might end up saying no. But the fairest approach is to see if the community is interested or not, and then we will definitely make sure we are not doing something which puts us and the community at risk.

AUDIENCE SPEAKER: Okay, just be careful with that.

WILFRIED WOEBER: Maybe another comment. Thanks for bringing it up, because it's an important topic to have a discussion on. My personal understanding is that the sort of thing Peter pointed a finger at is not relevant in a one-to-one sense here, because data retention is primarily about relationship data or traffic data, who was actually communicating with which other party, and this is not the communication itself, so I think it would be at the very edge of data retention. But the comment I wanted to make here anyway is that there seems to be a general trend to implement a right to be forgotten on the Internet, and this seems to go against that idea, or that expectation. If an object has been deleted, there is probably a good reason why it was deleted, so I think we should not try to raise the dead. That is one of those things.

And the other thing where I would have second thoughts is actually the data protection framework, because if the thing is deleted, then there is probably no longer a good reason to have the data in the first place. Whether it is still sitting somewhere in a log file or on a backup tape, or whether you can find ways to dig it up again, is a different technical story. But from a user interface point of view, personally, not as a Database Working Group Chair, I would rather advise not to dig up the dead. But that is just me.

DENIS WALKER: Do you think that applies to operational data as well as personal data?

WILFRIED WOEBER: Only in relation to personal data; I probably have fewer bad feelings about operational data. Whether a particular prefix has been in a routing registry is probably easy to argue in favour of digging up and correlating with other sources on the history of the Internet, but if a person object has been deleted, then I think we should leave it at that. That is just the start of the discussion.

DENIS WALKER: Well, as I say, we don't actually show any history of personal data objects at all, but we do show the links to them. The last action point concerned the implementation and deployment of abuse-c. This is kind of historic now, but we did publish an implementation plan and impact analysis on RIPE Labs, we published a detailed explanation of how to actually implement abuse-c on RIPE Labs, and we sent announcements to the various Working Groups at the time. It is now deployed and we already have quite good utilisation: 25% of allocations, and 35% of IP addresses in total, are already covered by abuse-c handles. So we have had quite a good take-up on this. There will be a little more said about this in the Anti-Abuse Working Group as well.

So I think that concludes the action points. I will hand you back to Kaveh Ranjbar, who will carry on with the progress we are going to make.

KAVEH RANJBAR: Thank you, Denis. So, I think there are no questions about the first part, so now we will move on to ongoing work.

First, let me discuss the things that are open on the table right now, the things that were sent to the mailing list recently or just now, in the past few days. One of them is the API; that is what we are working on at the moment. It is almost done but not yet deployed. We have redeveloped the API code. We had one, but there were too many issues with it: one was that it was a separate system, a separate piece of code, and the second was that it worked on top of port 43, so everything was translated to RPSL and all the data was translated back again. The new system works directly with the database core, so it is much faster, a lot more consistent, and covers all of our services. It streams results, unlike the old API; that was one of the issues developers had with the old one, especially if they wanted huge numbers of objects: they had to wait until the document was fully prepared before they got it. Now it streams on the fly, so the behaviour is much the same as using port 43. A nice feature is that the framework we use has self-documenting features: for any method we add to the API, the system automatically generates HTML documentation. Obviously we provide some input, like examples of how to use it, but the documentation is complete; it has samples of input data, the format of the output data, and the correlations between the methods. Any method we add will be visible there and developers can navigate it. This helps them to discover and use all the possibilities when working with the system.
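The talk does not go into the details of the new RESTful interface, so as a hedged sketch of the kind of access it provides, here is a minimal query in Python. The host name, path, parameters and the JSON response layout are assumptions based on the RIPE Database REST API as later publicly documented (rest.db.ripe.net); check the current documentation before relying on them.

```python
# A minimal sketch of querying the RESTful API described above. Endpoint and
# response structure are assumptions based on later documentation, not taken
# from this presentation.
import json
import urllib.parse
import urllib.request

def search_ripe_db(query):
    """Search the RIPE Database over HTTP and return the parsed JSON response."""
    url = ("https://rest.db.ripe.net/search.json?"
           "source=ripe&query-string=" + urllib.parse.quote(query))
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

result = search_ripe_db("193.0.0.0")
for obj in result["objects"]["object"]:
    print(obj["type"], obj["primary-key"]["attribute"][0]["value"])
```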

History of objects was also brought up before. The main values we see for the service are management and recovery of objects when there are changes; deletion is still a question. And also investigation: last Friday there was actually a question on the mailing list for some statistics which I was able to answer using the history service, so it already shows that it is useful, and I see some queries coming in for the history. It is still in beta. As discussed on the list, we want to get feedback; if users want, we can add more features, for example showing a diff between different versions of an object, which might be useful for big objects. It is available through all of our access methods, port 43 and the API.

And here is an example. What you see is for one of the RIPE NCC organisation objects; on the left is the list-versions command, which lists all versions. On the right it shows version one of the object, which was from March 2012. So that is how it works.
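For readers who want to reproduce the example on the slide, here is a small sketch of the same history queries done directly over port 43. The "--list-versions" and "--show-version" query flags are taken from the RIPE Database query documentation of that period; treat them and the object key as assumptions and check the current query reference before relying on them.

```python
# Sketch of the history queries shown on the slide, sent raw over port 43.
# ORG-EXAMPLE-RIPE is a placeholder primary key, not a real object.
import socket

def whois_query(query, host="whois.ripe.net", port=43):
    """Send a single whois query and return the raw text response."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# List all stored versions of an object, then fetch one specific version.
print(whois_query("--list-versions ORG-EXAMPLE-RIPE"))
print(whois_query("--show-version 1 ORG-EXAMPLE-RIPE"))
```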

Another proposal which is open on the list is improvements to dummification. When we started it, for the different parts, the daily dumps and the NRTM feed, we implemented it in a quite, let's say, intrusive way: we dummified almost everything we thought might include or refer to some kind of personal data. We also removed all the links, so if there was an attribute with a NIC handle we changed it to static text, all the e-mail addresses were changed to static text, and so on. Which is fine, but looking back at the situation, a lot of this data is still available with a single query, for example all the references to a specific NIC handle. On the other hand, especially from researchers, we have many requests saying: we don't want the personal data, but you dummified too much; you kill all the links between data, so the data sets you publish are useless for us and we have to do queries instead. With queries they don't get blocked, because they really don't look for the personal data, but it would still be much simpler for them to download the file nightly.

So we came up with a proposal, which is also linked here, and it was sent to the list. On the left is the current structure; as you can see, the admin-c and tech-c, for example, are both replaced with DUMY-RIPE. On the right is what we are proposing: to keep the links, the references to NIC handles for example, where I have put a red pointer. For the notify or changed attributes, the e-mail addresses, we are proposing to keep the domain part and just obfuscate the local part, and the same for the changed line. There are also other things which are detailed in the proposal. For addresses, we are proposing that if the address is more than one line, we keep the last line: if there are four lines of address, we obfuscate the first three and keep the fourth, because in most cases the last line contains the country or city and researchers use that kind of data. For phone numbers, we are proposing to obfuscate only the second half of the digits and keep the first half.
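To make the proposed rules concrete, here is a minimal sketch in Python of the behaviour described above: keep NIC handle references, keep only the domain part of e-mail addresses, keep the last line of a multi-line address, and keep the first half of a phone number's digits. The attribute handling is an illustration of the proposal as presented, not the RIPE NCC's actual export code.

```python
# Illustrative sketch of the proposed dummification rules; not the real export job.
import re

def dummify_email(value):
    local, _, domain = value.partition("@")
    return "***@" + domain if domain else "***"

def dummify_phone(value):
    digit_positions = [i for i, c in enumerate(value) if c.isdigit()]
    keep_until = digit_positions[len(digit_positions) // 2] if digit_positions else 0
    return value[:keep_until] + re.sub(r"\d", ".", value[keep_until:])

def dummify_object(attributes):
    """attributes: list of (name, value) pairs for one RPSL object."""
    out = []
    address_lines = [v for n, v in attributes if n == "address"]
    for name, value in attributes:
        if name in ("admin-c", "tech-c"):              # keep the NIC handle reference
            out.append((name, value))
        elif name in ("notify", "e-mail"):
            out.append((name, dummify_email(value)))
        elif name == "changed":
            email, _, date = value.partition(" ")
            out.append((name, (dummify_email(email) + " " + date).strip()))
        elif name == "address":
            # obfuscate every address line except the last one
            out.append((name, value if value == address_lines[-1] else "***"))
        elif name == "phone":
            out.append((name, dummify_phone(value)))
        else:
            out.append((name, value))
    return out

example = [("organisation", "ORG-EX1-RIPE"), ("org-name", "Example Org"),
           ("address", "Example Street 1"), ("address", "Amsterdam, NL"),
           ("phone", "+31 20 1234567"), ("admin-c", "JS0-RIPE"),
           ("notify", "noc@example.net"), ("changed", "noc@example.net 20130101")]
for name, value in dummify_object(example):
    print(f"{name}: {value}")
```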

It is still being discussed, and anyone present in the room is welcome to discuss it further, but it is a proposal based mainly on what we received from researchers.

And the last open proposal on the list is tags. There is a lot of operational metadata linked to database objects; some of it is possible to infer, some of it is not, but we don't publish it at all. As an example, take the resources which are from the RIPE region, any resource that is handled and managed by the RIPE NCC; that is a simple example. Users can infer that it is from the RIPE NCC if they look at the hierarchy and see who is on top of it, or if it is listed in the RIPE NCC stats file. But we could simply put a tag on the object and say this is a RIPE region resource, or this is an ARIN region resource and it is in our database because we imported it through GRS, or this is a child of a RIPE region user resource, and there can be many other things. What we think about tags is that they can cover different things. For example, we have automatic clean-up: we can say this object is a candidate for automatic clean-up in 20 days. Or, for name server checks, if the community wants to implement that, because I heard some discussion about it, tags can be a good way of exposing that data: if you don't want e-mails to be sent, we can add a tag saying that last week we did a check and this server doesn't have it or was not available. The same goes for our registry department: nowadays they do assisted registry checks, so we could tag an object saying we went through the documentation for this organisation or this resource and it was OK. Obviously, each tag should be discussed, so anywhere we want to propose adding a tag it should be discussed with the community, but it will provide an informal way of publishing this informational metadata which comes with objects. We think the main value is that tags will be very useful for data clean-up, because the ones we are proposing, and there are some examples in the proposal, can easily be used to tag objects that might not be useful, or that the user might want to change, or that have syntax issues, for example. As for how to use them: by default we are proposing no change to the existing behaviour, but if users want, they can add an option to see the tags along with objects, and they can also filter the results, saying, OK, I only want to see results that include this tag, for example only resources from the RIPE region, or only results that don't have this tag. It was sent out, I think, two weeks ago, but I didn't get any feedback on it.
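To illustrate how the tag filtering described above could look from a user's point of view, here is a short sketch over port 43. The flag names (--show-tag-info, --filter-tag-include) follow the query syntax later documented for the RIPE Database; at the time of this talk they were only a proposal, so treat the flags and the tag name as assumptions.

```python
# Sketch of the proposed tag queries over port 43; flags and tag names are
# assumptions, not confirmed by this presentation.
import socket

def whois(query, host="whois.ripe.net"):
    with socket.create_connection((host, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    return data.decode(errors="replace")

# Show any tags attached to the matching objects.
print(whois("--show-tag-info 193.0.0.0/21"))

# Only return results carrying a given tag (tag name is illustrative),
# for example resources registered in the RIPE region.
print(whois("--filter-tag-include registered 193.0.0.0/21"))
```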



WILFRIED WOEBER: May I just go back to the dummification. I think it's a good idea not to be overly aggressive and thus make the data set almost useless for some parties.

But while you were talking, I got the feeling that this is a slight improvement but probably not a clean solution, in the sense that by retaining a subset of the real data and obfuscating or deleting the remaining part completely, you may actually create seemingly existing relationships that are not there. On one hand this might make the data set less reliable or less correct with regard to interpretation, and on the other hand, disclosing the domain, for example, by just deleting the local part, would probably already give spammers a good lead for doing clever things to reach a mailbox that does exist.

So, I am not really feeling very comfortable with that suggestion. It is better than what we have now, but to my knowledge there are mechanisms already known and deployed to do real anonymisation, in the sense that you replace the complete real tag with a unique artificial tag which has no relationship to the real data but still retains the relationship information completely and correctly. Do you think that could be one way of looking at it in the future? Just an idea.
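A minimal sketch of the consistent pseudonymisation Wilfried describes is shown below: every real identifier is replaced by a stable artificial token, so relationships between objects survive while the real values never appear. This is purely illustrative and not something the RIPE NCC implemented at the time.

```python
# Sketch of consistent pseudonymisation: stable artificial tokens replace real
# identifiers, preserving cross-references without disclosing the originals.
import hashlib
import hmac

SECRET = b"rotate-me-per-dump"   # kept private by the data publisher

def pseudonym(value, prefix="ANON"):
    """Map a real value to a stable, unlinkable token."""
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:8].upper()
    return f"{prefix}-{digest}"

# The same NIC handle always maps to the same token, so cross-references in the
# dump still line up, while the real handle is never disclosed.
print(pseudonym("JS0-RIPE"))   # e.g. ANON-3F92A1C4
print(pseudonym("JS0-RIPE"))   # identical to the line above
print(pseudonym("AB1-RIPE"))   # a different, unrelated token
```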

KAVEH RANJBAR: Two things here. One, Peter also mentioned on the list before that yes, it is possible to infer the data. If you ask me, and it is just my personal opinion, there is nothing from the NCC side in this proposal that we think is better or worse; it is just that some of the users wanted it, so we thought we should act on it and propose something. But personally, my take is that this data is available publicly, so whatever effort someone wants to put into inferring something or misinterpreting the object, they can just do one query and get the data. So in this case I don't think it is a problem, but that is my personal opinion, I might be wrong.

On the other point, about putting in completely different links while maintaining the relationships: it is possible, but the thing is that the database dump is supposed to be a view of the RIPE Database. There are other people who import our data; the dumps provide a copy of the RIPE Database which is dummified but, especially for routing, completely usable. I think that would break their use case, because you don't want to go to, for example, the Merit database and get a copy of a route object with handles that don't work in the RIPE Database. For that reason, personally, I think that would break some of the existing behaviour.

PETER KOCH: I wasn't going to repeat what I said on the mailing list, but Wilfried's contribution made me stand up and expand on it a bit. I think the problem we are facing here is that there is this user community that has certain desires, and it is arguable whether these are compatible with the legal purpose of the RIPE Database, or compatible or in line with its mission, which is a subtle difference. Of course they are most interested in being able to relate these various objects, whereas the data protection perspective would suggest that this is actually not supposed to happen. What this boils down to, I think, is that we have moving targets in terms of what the design goals of this dummification are, and maybe we need to take a step back, look at these design goals, and, as I suggested on the list as well, have a tangible assessment of the security and other aspects before proposing or deciding upon a particular change to this whole thing.

DENIS WALKER: Yes.

PETER KOCH: That doesn't mean postponing indefinitely, but it is hard to argue with moving-target goals here.

KAVEH RANJBAR: Yes. Back to the community, I think it's up to Chairs to decide. Thanks.

OK, so now the things that are not yet announced; they might have been discussed, but I will quickly go through them as well. Automatic clean-up: we have always had it, it was started around 2005, Denis should know the exact date, but I think it was about 2005. Person, role, maintainer, key-cert and organisation objects are cleaned up if they are not referenced from any other object; after 90 days they are automatically cleaned up. This is related more to the tags, but we also want to add a tag to an object which is a candidate for clean-up, because it is not referenced anywhere and will be deleted at some stage. So the users can actually see that this object might be deleted and can take action if they want.

What we want to add, and this was discussed in the initial discussions when this unreferenced object clean-up was brought up six or seven years ago but was never implemented properly, is to also include clusters of objects. Sometimes people create person and role objects that have back references to each other, and the cluster can even be bigger than two, it might be three or four objects, and currently we don't clean them up. So we want to extend the clean-up to take such clusters and also delete those unreferenced cluster objects.
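The cluster case can be pictured as a small graph problem: person, role and maintainer objects that only reference each other, with nothing outside the group pointing in, form an unreferenced cluster. Here is a minimal sketch under that assumption; the real rules (object types, grace periods, notifications) are defined by the RIPE NCC's clean-up job, not by this code.

```python
# Sketch of unreferenced-cluster detection: a connected component of contact
# objects that nothing outside the component references is a clean-up candidate.
from collections import defaultdict

def unreferenced_clusters(internal_refs, external_refs):
    """internal_refs: dict key -> set of keys it references (person/role/mntner only).
    external_refs: set of keys referenced from outside that group (e.g. from inetnums)."""
    graph = defaultdict(set)
    for src, targets in internal_refs.items():
        for dst in targets:
            graph[src].add(dst)
            graph[dst].add(src)          # undirected: back references count too
        graph.setdefault(src, set())

    seen, clusters = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:                      # depth-first walk of one component
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        if not component & external_refs: # nothing outside points into this cluster
            clusters.append(component)
    return clusters

refs = {"AA1-RIPE": {"MNT-A"}, "MNT-A": {"AA1-RIPE"},   # two objects referencing each other
        "BB1-RIPE": set()}
print(unreferenced_clusters(refs, external_refs={"BB1-RIPE"}))
# -> [{'AA1-RIPE', 'MNT-A'}]  (BB1-RIPE is kept because an external object uses it)
```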

Another thing is placeholder clean-up; this is something we want to propose. The main problem with the current placeholders is that they are a lot of clutter, especially for users who don't know much about how the hierarchy works. A good example is 0/0, which shows up in the query results, or the AS block objects: they don't contain useful information. The old software needed them, but now we don't need them any more and we don't have to display them, so we want to propose removing them. But some of them, like the AS-BLOCK objects, do have a good use case. The main thing is that if the resource is not ours, we can say that this block is from LACNIC, so you might want to go to LACNIC and search; we have a list of all AS blocks, even those from other RIRs.

So what we want to do: we already have the GRS service, where we import all the data from the other RIRs. The change we are going to make is that, when we import data from other RIRs, instead of importing everything they have, which is what we do nowadays, we want to look at their published stats file. Each RIR publishes this public file, and it contains all the resources they manage, so from their point of view those are their resources. We want to change the import process to also get that file and first check whether the resource is theirs; only then do we import it from their database, along with any children of that resource. That means we can have one global source, which will include RIPE region data plus the rest of the world, and in general everything should be unique there. We know there is a very small number of conflicts, but that is what the registration services managers of the different RIRs work on; they meet at almost every RIR meeting and resolve these things. That database will have a global view of what is out there on the Internet. So we want to do this first and then come up with the removal of the placeholders, and that should cover the one small remaining benefit those placeholder objects have.
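As a rough illustration of the import check just described, here is a sketch that consults an RIR's published statistics file before importing a resource. The pipe-separated layout shown follows the RIRs' public "delegated" stats format; the sample line, file names and any URLs are assumptions for illustration only.

```python
# Sketch of the GRS import check: only import an object if the source RIR's
# published stats file says the resource is theirs.
import ipaddress

def load_ipv4_delegations(lines, registry):
    """Return the IPv4 networks a registry says it manages, from a delegated stats file."""
    networks = []
    for line in lines:
        if line.startswith("#"):
            continue
        fields = line.strip().split("|")
        if len(fields) >= 7 and fields[0] == registry and fields[2] == "ipv4":
            first = ipaddress.IPv4Address(fields[3])
            last = ipaddress.IPv4Address(int(first) + int(fields[4]) - 1)
            # the stats file gives a start address and an address count
            networks.extend(ipaddress.summarize_address_range(first, last))
    return networks

def is_registry_resource(prefix, networks):
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(block) for block in networks)

sample = ["arin|US|ipv4|192.0.2.0|256|20010101|assigned"]   # illustrative line only
blocks = load_ipv4_delegations(sample, "arin")
print(is_registry_resource("192.0.2.0/25", blocks))          # True: import it and its children
print(is_registry_resource("198.51.100.0/24", blocks))       # False: skip it
```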

Yes, and on top of that, another idea we have is providing an easy Whois service. The main issue right now, especially on the web, is that about 32% of the query traffic we get comes via the web, and most of those web users arrive from pop-ups, from their security software or something like that; most of that software has static links to our database page. When they come to our page and query for a resource, they see a lot of data, sometimes the placeholders, and in most cases also the route object and all the other things. They either don't understand it and leave, or they sometimes contact completely the wrong people, for example from the route object, because these users don't know how the system works or how Internet numbering works. Because of this, we thought: if we remove the placeholders and have that global data set, we can provide a simple search box; users type in a resource and what they get back is, for example, "this resource is from the American Registry for Internet Numbers, ARIN, there is a contact for it, this is the address, and it seems to be registered to this organisation", not in RPSL but in a very readable web format. We thought that could be the main entry point for our web interface, and then we will obviously still have the usual one which provides RPSL and has all the options; the basic, default one we want to make simple and show the basic set of information that normal web users are looking for.

And another thing which is not yet proposed, but which I wanted to check with the community first, because it came up mainly from all the help requests we get. This is the most mentioned problem: we have a lot of tickets and phone calls, and in training courses we spend most of our time on it, and that is the authorisation for route objects. Route object creation needs authorisation from the ASN holder plus the IP address range holder, so there have to be these two different authorisations. I was talking with Denis, and based on the operational background we have, and just seeing what happens in reality, we thought the authorisation from the AS holder might not be required. We also looked at the old discussions; we might be completely wrong, but we didn't find anything regarding that. Why we think so: one reason is that the route object is a statement which says this network might be advertised from that ASN, so the ASN holder is not that much involved; there is just a "might". The other reason is that certification has exactly the same behaviour, because when someone creates a ROA, which is somehow the same kind of statement as a route object, only the IP address holder needs to hold the certificate for that IP range; they can type in any AS they want. But there might be something we are missing here, and I thought that even before writing more than this, let's check it with the community, and if it is viable we can then write the proposal.
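The comparison being made here, dual authorisation for route objects versus prefix-holder-only authorisation for ROAs, can be summed up in a few lines. This is a sketch of the rules as described in the talk, not the actual Whois authorisation code, and the maintainer names are invented.

```python
# Sketch comparing the two authorisation models discussed above.
def can_create_route(prefix_maintainers, asn_maintainers, credentials):
    """Current model: the update must pass auth for the inetnum AND the aut-num."""
    return bool(credentials & prefix_maintainers) and bool(credentials & asn_maintainers)

def can_create_roa(prefix_cert_holders, credentials):
    """RPKI model: only the holder of the certificate covering the prefix signs,
    naming any origin AS it likes."""
    return bool(credentials & prefix_cert_holders)

creds = {"EXAMPLE-MNT"}                                            # who submits the update
print(can_create_route({"EXAMPLE-MNT"}, {"OTHER-AS-MNT"}, creds))  # False: AS holder missing
print(can_create_roa({"EXAMPLE-MNT"}, creds))                      # True: prefix holder suffices
```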

WILFRIED WOEBER: Is it useful to provide you with some input right now?

KAVEH RANJBAR: Yes, please.

WILFRIED WOEBER: And this is wearing my hat as part of the network operations team for our network: if you do away with the requirement that the maintainer of the aut-num agrees to the creation of a route object, and if you use the routing registry as the primary source of information from which you build your route filters, you open up two capabilities. First of all, by this machinery you give the inetnum holder and maintainer permission to write to my routers, to write to my access lists, and this is one of the major issues. The other major issue is related to the routing registry: if we change the semantics from "if a route object exists, then the AS owner has agreed to it" to "anyone can ask for creation of route objects in relation to a particular AS", and you have this automatic procedure, you can actually do very nasty things by creating route objects for subsets of the address space and announcing them from a different AS. If you are the operator of that particular nasty AS, you can inject those more specific routes into the routing system and thus hijack traffic.

So, I am not saying this is completely out of the question, but I think this is something we should move over to the Routing Working Group. The machinery belongs to the database, but agreeing on the semantics and on the technical requirements is for routing, I think. Thanks.

AUDIENCE SPEAKER: It is very timely that you should say this, Wilfried, because I have a presentation in the Routing Working Group on this exact issue, and a slide pack ready to discuss it there, so realities and desire will collide.

KAVEH RANJBAR: Single sign-on was also brought up on different occasions. The main value will be for our members; it has other values, but mainly it will be for our members. Currently there is a very strange situation, which is hard to explain even to people inside the NCC. Some of the objects, the registration objects which are given by the NCC to members, have the RIPE NCC maintainer on them, but members can edit some parts of those objects: for example, if it is their LIR they can maintain the address or the contacts listed on the object. In order to do that, since they don't have the RIPE NCC maintainer passwords, they log into the LIR Portal, where they have access to some of their objects which are maintained by the RIPE NCC and can edit parts of them, not the whole object but parts. For the rest of their objects they have to use the database update tools, syncupdates, mail or the API, so it is kind of split and causes a lot of confusion. One of the cases is abuse-c, because they need to add it to the LIR organisation object, which means they have to log into the LIR Portal, but to create the role object they have to go through web or mail updates, so you already have two different authentications there. Single sign-on can easily resolve that: you automatically get an account which you can log into and use to manage your portal account and all of that, and if we integrate that with authorisation in the maintainer, then it can work seamlessly. What we want to do is propose something which is fully backward compatible and works both ways, so adding SSO shouldn't mean you cannot use mail updates any more, and using e-mail updates shouldn't mean you cannot log in from your RIPE NCC Access panel. RIPE NCC Access is an open system, so anybody can register, and they should be able to link maintainers with RIPE NCC Access; even non-members can use it as a convenience. It will mainly affect the web tools and the API; it won't affect mail updates, for example.

And finally, we want to clean up the documentation for the RIPE Database. What we have right now, especially on the web pages, is many different documents, and it is hard to navigate; finding things about the RIPE Database on our website is really hard. What we want to do is provide only three sets of documents: one on accessing data, all kinds of queries, and we want a very easy to read, short and concise set of documents that just defines the behaviour of the software, without going into too many details or examples. The same with another document only for updates, and finally another document for developers. As for the rest, we have a lot of material, but what we want to do is hand all of it over to our Training Services. Obviously we will work with them to help them come up with proper tutorials on all of that, but they produce videos, tutorials and leaflets, and those things will be handled by Training Services. One last thing: the documents will be linked to a specific version of the code, so you know the documentation is for the version you are using, and it will always be up to date with the latest version.

So, that is my presentation. Thank you for your patience.

WILFRIED WOEBER: Thank you very much, both to you and to Denis. Any questions from the audience, or any comments beyond those we have had already?

BRIAN NISBET: From HEAnet. Not so much a question: you have single sign-on listed as an idea; from my point of view, please progress this beyond an idea. Single sign-on, wherever it may be happening at this point in time, is a fantastic notion. It's not really a big problem for us, it's not as if we are logging into lots of different accounts, but I think it is something that is vital in today's situation. There are far too many user names and passwords, so the simpler we can make this, the better.

KAVEH RANJBAR: Thank you very much.

WILFRIED WOEBER: OK. Thank you very much. And with that, I'd like to move to the next item on the agenda, and that is the bigger issue of internationalisation and the redefinition efforts around the Whois stuff. There is an IETF Working Group with the acronym WEIRDS that is dealing with, well, redesigning the Whois machinery under a different name and a different acronym. We had a brief wrap-up on that one during the previous RIPE meeting. As I learned recently, there isn't really, well, not really extremely important stuff going on at the moment that would need our awareness or our reaction. Peter Koch frowns a little bit, so I may be wrong. I am just hearing from Peter that some documents are in last call, so obviously this Working Group does make progress, and one of the aspects, one of the mandates of this Working Group, is actually to deal with internationalisation. That is the bridge to Piotr's presentation, or problem statement, or expectation. So the stage is yours.



SPEAKER: My name is Piotr and I am looking here for advice, and the question is simple: to PDP or not? My topic is simple; it is about the internationalisation of the resource registry. It was presented by me at RIPE 61, so I am not going to advertise it again now; everything is on the slides from that presentation, and it has probably been recorded by these guys from the RIPE NCC.

It hasn't been implemented yet, for various reasons. I was busy, so I was not pushing this idea, and as Wilfried said there is another project on the way, yes, and other reasons why this has not been implemented could come to mind.

However, I think it is vital for us to have accurate data in the resource registry, and internationalisation will help this accuracy. So right now I am in doubt whether I should go to the PDP or not. Let me explain. From my point of view, internationalisation is just a kind of technical implementation matter, nothing more, nothing less. However, I have no idea how to ask the RIPE NCC to do it. We as a community can make a policy, but from my perspective a policy which states "OK, the database should be international" is odd, it's weird, and I don't think it is the way we, or I, should follow. So I asked Nigel and Wilfried what I should do, should I do a PDP or not, and Nigel politely asked me to ask you whether this is just an implementation detail and I should politely ask Denis and his colleagues to implement it, or whether I should raise it through the PDP and ask Emilio and so on. So that is the question for you, or for the rest of the mailing list in the Working Group.

Yes. The question has a background as well: I think we should leave the implementation details to the RIPE NCC staff, no matter whether we go with the PDP or not. There are a lot of ideas for how we could do it: a new protocol, a UTF-8 extension to the current objects, new objects, and so on and so on; a lot of possibilities and a lot of work to check which implementation would be the best one. So that is not the main problem; I am going to leave the details to the RIPE NCC. However, I want to ask you whether I should or should not go with the PDP, and that is all.

KAVEH RANJBAR: Just a question to clarify, because I know the problem and we have already discussed it. The technical details, as you mentioned, can be discussed; even if they are not in the policy, if you go with a policy proposal they can be discussed on the list and with the community. The question is: should we have a policy, or any other means, to enforce, ask or allow people to have different data sets? And if that is the case, can that be the only address? For example, can people in Arabic-speaking countries have their address registered only in Arabic, or do we want an address in English and possibly also Arabic? So the main question we had, and it was presented before, is: please clarify for us, by whatever means is required, policy or otherwise, how this data, the actual data, should be treated. The technical details will find a way; I am sure we can easily work those out with the community.

SPEAKER: From my perspective there should be at least an English translation; that is obvious to me. However, I still have no idea whether I should enforce your implementation through the PDP or just by asking you politely.

SHANE KERR: ISC. I don't think this actually falls under the area of things that require PDP.

SPEAKER: The same for me.

SHANE KERR: I don't see how it relates to shared resources or community process or anything like that, so I don't think there is any strict requirement for going through a PDP. People sometimes use the PDP process for things that don't need it just because it gives them a framework. I probably wouldn't recommend that; basically it's a lot of overhead and work, and it adds a lot of time delay, and if it's not needed, why wait that extra time? Having said that, I do think this is exactly the kind of thing that the database group does discuss and come to consensus on, and I think discussing the details in public on the list is probably required for this; otherwise you are going to get very angry people when the changes get implemented. But I think the RIPE NCC has been very good about this, as you have seen recently with the changes they proposed: the details are all laid out, and exactly what is going to happen is discussed with the community. So I think really that is all that needs to happen.

SPEAKER: I absolutely agree with you. I am not for the PDP process, but I am asking the meeting whether I should follow it or not; that is the question I am putting to the community.

NIALL O'REILLY: As a PDP insider, I want to add emphasis to what Shane said.

WILFRIED WOEBER: Wilfried here, with my hat of sometimes having to endure ICANN meetings, with that background. First of all, thanks for bringing this up again, because I think this community, and not just the Database Working Group subset but the RIPE community as a whole, has to think about those issues again. Personally, I also agree that the PDP process is overkill, but, and that is the big but, I am not sure that we should limit the discussions in this Working Group to the toolset alone or to the technical implementation on the software side. With that little bit of background of how many problems and how much grief this internationalisation gave the names people, who also have their Whois infrastructure and all those problems that we have only to a much lesser extent, and seeing what number and types of problems popped up in that environment with transliteration, with having the authoritative local language and script version plus a translation, as you already suggested, and that sort of thing, my feeling is that we should bring this up in the Services Working Group. The technical implementation is one end of it, but the RIPE NCC would then have to manage the service of supporting this whole internationalised infrastructure. Well, just to start the discussion: maybe we should take it to the mailing list again, and maybe let's talk to the chairpersons of the Services Working Group during the rest of this week about how and where to do that properly. And, as I said, there is this WEIRDS activity in the IETF and there is a parallel activity in ICANN itself; the IETF work was sparked because part of the community says ICANN is not meant to define protocols or to make technology, and that was the reason why it was pushed to the IETF, which also makes some grinding sounds, as you maybe know. So if we really see a problem that is more complex or bigger than just a technical implementation in the software development department, we might consider setting up a task force to try to find out what the problem space looks like and then what has to be done. So, any other comment?

NIGEL TITLEY: Whose action is that?

WILFRIED WOEBER: Whose action is that? My first reaction would be Piotr, Nigel and myself, to talk to the services folks, try to do that during the rest of this week, and then come back to the mailing list. And of course you are welcome to join in as one of the people collectively responsible for the action.

SPEAKER: OK.

WILFRIED WOEBER: Thanks for bringing it up again. And this takes us to the next item on the agenda, which is a little bit of feedback on the anti-abuse side. Brian, to whatever level of detail you want to go into, just give us a round-up of what the situation is, on top of what Kaveh Ranjbar already said, that the implementation and the documentation are well on the road.

BRIAN NISBET: So, here slightly with my Anti-Abuse Working Group hat on and slightly with the hat of the further crazy plans of myself and Tobias. Kaveh Ranjbar has already talked about the take-up of abuse-c, and there will be more in Anti-Abuse tomorrow, in this room, regarding that and regarding how many people have put the information into the database, and what the plans are to continue to push that, to the point where we are turning up at people's houses and telling them they haven't applied abuse-c; the NCC travel budget, knocking and ringing on doorbells, it's going to be fantastic. So that is an ongoing effort to get that in place. Myself and Tobias were talking and we said, OK, what do we do next, because abuse-c was always just the first step, and the proposal was always just a first step, and anyone who didn't notice that clearly wasn't paying attention when we were talking about it. So now that we have that object in place, now that we have the information, and that is being done in a new way in the database, we want to progress from there and push it further. Right now we just have some plans, and this is all very much things we will be bringing to the community and things we want your feedback on, and we are going to be presenting it either in Database or possibly in NCC Services, because that seems to be the cool place to bring proposals these days, as they are rapidly finding out; but explicitly not Anti-Abuse, because these are things that we want to do without our Working Group Chair hats on, so we want to give them to somebody else's Working Group to have all the fun of cat herding. So the crazy plan now is to look at a proposal relating to data verification of abuse-c, and we are going to start with that because we think that is the right way to go about it: put a proposal out to the community to talk about data verification specifically of the abuse-c object. Then, in collaboration with the fantastic people from the NCC database team and their already existing plans to look at the admin-c and tech-c and make them more like the abuse-c in the way they are represented in the database, once that rationalisation is completed, we are hoping we can take the proposal we have already written, which hopefully by that point has already passed for abuse-c, and apply it to the tech-c and the admin-c. The INEX party last night was very, very good; I am a little congested this morning, as our politicians say in this country. So that is the aim, and we will then get to a point where we have regular automated data verification of, certainly, abuse-c, admin-c and tech-c, and who knows where we can go from there. That is the crazy plan at the moment. Like I said, myself and Tobias are going to put together an initial policy draft for that, and we will start by sending it here, and then Wilfried and Nigel will get to decide whether they want to deal with it or whether they think Kurtis and Bijal should deal with it instead, but it will be brought forward to the community and there will be more proposals after that. We don't have a lot of detail right now; this was something we decided yesterday, which I had to write down before I went to the social and then read this morning thinking, oh, that looks familiar. But I am hoping we can get some words on paper, what do you reckon, sometime maybe next month, so fairly soon we plan to actually put things in place. I don't know if there are any questions, or whether people are already saying we are crazy. Shane looks uncertain.

WILFRIED WOEBER: Thank you very much. Any other comments?

NIGEL TITLEY: Yes. I'd like to say: you are crazy.

BRIAN NISBET: We are just going to write some stuff down, the NCC will do all the hard work, it's fine.

WILFRIED WOEBER: While you are at the microphone you cannot take notes of it.

BRIAN NISBET: So it's not noted that we are crazy.

WILFRIED WOEBER: But we are having the live transcript so it's going to be on record anyway.

BRIAN NISBET: That is true.

WILFRIED WOEBER: Thank you very much. And this takes us to the last item on the agenda: does anyone want to bring up anything that we missed? Kaveh Ranjbar?

KAVEH RANJBAR: During the break we will have some abuse-c T-shirts, so you are more than welcome to pick one up.

WILFRIED WOEBER: Whoever wears one of those may be abused. OK. Thank you very much, everyone here in this room. I am looking forward to seeing you again at the Working Group meeting in Athens, and you have got a head start for the coffee break. Thank you.