 The routing session commenced as follows:

SPEAKER: It's time to begin. This is the Routing Working Group session at RIPE 58. If you want to be in the other one, the anti‑abuse session, this is not the room. So let's get started. The agenda is pretty full. There is a scribe from the RIPE NCC, and there is a Jabber scribe also from the RIPE NCC; thank you very much for doing that work.

Microphone etiquette: state your name whenever you walk up to the microphone. The people who are attending remotely do not have the benefit of seeing who you are, and it's much easier for them if you state your name.

Some time ago we circulated the minutes from RIPE 57. Are there any comments that need to be included? No; then they become final at this point.

This is the agenda we have put together for this session. Is there anyone that would like to see anything added? No. There will be a bit of an announcement later. The first speaker on the list is talking to the other speaker on the list, Danny.

DANNY McPHERSON: I am Danny McPherson. I am going to talk for a few minutes about some iBGP scaling stuff, and there are things in this talk that, as a network operator, you could do in your network to change some of the effects of what I talk about today. Some of it is implementation stuff; other parts are protocol tweaks that should be made. So there is a slew of stuff here. A little bit of perspective: one of the key things about some of the data you are going to see is that most of the research, studies and analysis out there is based on DFZ size, and any academic study you have seen has traditionally been focused on the perspective of an external BGP session. It's very different if you look at the dynamics of iBGP in a network, the number of paths reflected and how many routes you have where, and so forth, so we are going to talk a little more about that today.

What breaks first? The big thing that you hear about from a routing scalability perspective is DFZ size: how many routes are there in the routing system, how many of those end up in a FIB, and so what is the size of the chips that are doing this, and so forth. That is of course a very important issue, but there is something else that matters a lot, and it's the number of unique routes, not just prefixes, in the routing system. The reason it's important is that the more unique routes you have, the more state, the more churn and the more FIB I/O is going to be affected. So what we are going to look at and talk a lot about here is the number of unique routes as opposed to the number of prefixes in the routing system.

Here is an example that came from Level 3; it's about a decade's worth of growth in FIB size, which is more or less the number of unique DFZ prefixes in the routing system. That is the red line; I think you can see this pretty well. The green line is the number of paths, which is very dependent upon your network and routing architecture, but it's also dependent on things like external interconnection denseness: the denser the interconnection, the more paths you are going to have for a given prefix. And the number of paths is actually growing more steeply than the number of unique prefixes.

So, a bit of a busy slide. I think most of you that do routing or network architecture have tried to envision this in some way, and here is my attempt at capturing it. The top left box there is sort of the BGP routing table: for each one of your peers you have an adj-RIB-in and a BGP process, and the Loc-RIB holds the best routes. Those feed into some sort of routing table manager, something that says, for example, I am going to prefer a static route or a connected route over a route learned through BGP. That is what a routing table manager would do. Then you generate your FIB: the RIB is extracted, some hardware address information and other stuff is added, and it's distributed to one or more FIBs. Some typical numbers today in DFZ size for a typical backbone network: maybe 300,000 to 350,000 unique prefixes in your FIB, some larger, some half a million. But one of the things that is important is the number of unique paths in the routing system, routes as opposed to unique prefixes: most larger networks today have anywhere from 2 to 6 million paths in those routers. Ten years ago now, we had about two-and-a-half million paths in some of our core routers, and we changed some stuff on the iBGP topology and network architecture side; I am going to talk a bit about that as well.
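To make the pipeline concrete, here is a minimal sketch, assuming invented names and a heavily simplified best-path rule (highest local pref, then shortest AS path); real implementations have many more tie-breakers and distribute the FIB to line cards. The point it illustrates is the one above: the FIB holds one entry per prefix, but the routers carry every unique (prefix, path) pair across all adj-RIB-ins.

```python
# Hypothetical sketch of the RIB/FIB pipeline described in the talk.
# All names are illustrative, not any vendor's API.

ADMIN_DISTANCE = {"connected": 0, "static": 1, "bgp": 20}

def best_bgp_path(paths):
    # Simplified best-path selection: highest local pref, then shortest AS path.
    return max(paths, key=lambda p: (p["local_pref"], -len(p["as_path"])))

def build_fib(adj_ribs_in, non_bgp_routes):
    """adj_ribs_in: {peer: {prefix: path_attrs}}; non_bgp_routes: {prefix: (proto, nexthop)}."""
    # Every unique (prefix, path) pair across all peers: this is the "paths"
    # number that grows faster than the prefix count.
    all_paths = [(pfx, p) for rib in adj_ribs_in.values() for pfx, p in rib.items()]
    loc_rib = {}
    for pfx, path in all_paths:
        loc_rib.setdefault(pfx, []).append(path)
    # One FIB entry per prefix, no matter how many paths fed it.
    fib = {pfx: best_bgp_path(paths)["next_hop"] for pfx, paths in loc_rib.items()}
    # The routing table manager lets static/connected win over BGP.
    for pfx, (proto, nh) in non_bgp_routes.items():
        if pfx not in fib or ADMIN_DISTANCE[proto] < ADMIN_DISTANCE["bgp"]:
            fib[pfx] = nh
    return fib, len(all_paths)
```

Two peers announcing the same /24 produce one FIB entry but two paths' worth of state and churn, which is the asymmetry the talk is about.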

One of the things that is interesting: if you have got 6 million paths in your BGP table, any change in a best path means that all of this stuff has to change, that red line that you just saw there. Everything has to change. The more paths you have and the more churn you have, the more FIB I/O you are going to have all the way through this. One of the interesting things on the RIB side: the RIB-to-FIB download is usually not even a single transaction, it's usually multi-phase stuff: here is a route; thank you, I acknowledge that; and so forth. The I/O, bandwidth and CPU required to keep this information updated is significant, and the number of paths has a huge impact on forwarding performance, backplane capacity and all those sorts of things.

So why has the number of unique routes increased faster than the number of prefixes? One of the biggest things is internal topology, and another is external interconnection denseness: as the Internet gets flatter and networks become more densely interconnected at the AS perimeter, the number of available paths grows larger. I will illustrate that in this slide.

So you have got an AS here with one prefix, a /24, and they decide they are going to connect to three ISPs. It's not just three paths, three unique routes in the routing system; what you end up with, if these ISPs interconnect in ten places, is 22 paths at the perimeter of your network, just for this one prefix and this one multi-homed AS. And this is just on the perimeter. If you use something like route reflection with three route reflectors, then that multiplies as it comes into the network, and I am going to talk about that some more in a minute.

So one of the things that you hear a lot of folks say about route reflection is that you get this implicit aggregation: a route reflector only advertises a single best path to the rest of the iBGP speakers, and that is true. However, most people actually use more than one route reflector in their network, so if you have two or three, or your route reflectors mirror your physical POP topology, then however many unique paths you have within a given POP, each one is multiplied by the number of route reflectors that you have. I will illustrate this in just a minute.

So what you see here is a really simple topology where you have got a prefix down the bottom and a cluster; the blue routers represent aggregation routers where you might connect customers or data centres, and the grey ones are route reflectors, so those would be the interconnect routers for backbone links and so forth. Typically you have some physical topology and some model for that in a given POP within a cluster; that bottom left cloud would be a cluster, for example, and the grey routers would be route reflectors, iBGP peered with the route reflectors in the other POPs. If you learn one prefix, P/24 in this case, on one of those clients, he would tell his three route reflectors and those would tell everyone else, so everybody inside that iBGP mesh would have three copies of that prefix from the three route reflectors in the one POP. If you multi-home to another cluster, then that is three more that are going to be in the core, so now you have six copies of the same prefix in every one of those routers because of the route reflector topology. I will illustrate this some more.
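The multiplication in the cluster example above is simple enough to write down; this toy function just restates the arithmetic (copies equal clusters learning the prefix times route reflectors per cluster) and is not a model of any particular implementation.

```python
# Toy model of the cluster example: a prefix learned in k clusters, each
# with r route reflectors, shows up as k * r copies in everyone else's
# adj-RIB-in inside the iBGP mesh.
def copies_in_core(clusters_learning_prefix, rrs_per_cluster):
    return clusters_learning_prefix * rrs_per_cluster

print(copies_in_core(1, 3))  # single-homed to one POP with three RRs
print(copies_in_core(2, 3))  # multi-homed to a second cluster
```

This is why the green "paths" line in the earlier chart grows faster than the prefix count: the multiplier is set by your route reflection topology, not by the DFZ.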

So one of the other things is that you get a lot of gratuitous updates with route reflectors. One of the things we have been looking at, and there is going to be a paper at IMC I am working on: during the busiest times of BGP processing, during the top 0.01 percent, 97 percent of all the updates you receive from an average peer are duplicates, and if you are receiving these, route flap dampening and any other policy would apply to each one of those updates. So in a minute I will illustrate for you how route reflectors inside a network, or any non-transitive attribute like next hop or cluster list or BGP MEDs, can cause this: if any of those change inside your network and it goes out to the perimeter, you are going to send duplicate updates, and the external peer can't tell why it changed. To illustrate this: you have got this prefix and a client down here on the bottom, and he advertises a given prefix to his three route reflectors, and each of the route reflectors decides that the blue path is the best, right? So what happens when one of these goes away, like the blue route reflector, even though he may not be in the forwarding path, is that they select a new one. It doesn't seem like a big deal: hey, I had some iBGP topology change, not a big deal. But what happens in implementations today is that it results in the edge router on the other side of the network sending three updates, or five, or maybe more, externally, as a result of those attributes that changed inside the network. So it causes lots of duplicates, and that route could even be suppressed at the egress point of the network as a result.
One of the things I noted, during busy times, and we have got a lot of stats on this that we are going to publish: 97 percent of the updates that you receive during the busiest processing times are exact duplicates, and it's for reasons like this: some non-transitive BGP attribute like local pref or next hop or cluster list or whatever changes, and as a result of that, you send gratuitous external updates.
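The mechanism above can be sketched in a few lines. This is a simplification with invented names: it treats next hop, local pref and the reflection attributes as things the external peer never sees (in real BGP the next hop is rewritten at the EBGP edge rather than stripped), so an update triggered purely by internal churn is byte-for-byte identical to the last one sent.

```python
# Attributes that are internal-only or rewritten at the EBGP edge
# (simplified; illustrative set, not the exact RFC 4271 taxonomy).
INTERNAL_ONLY = {"next_hop", "local_pref", "cluster_list", "originator_id"}

def externalize(update):
    """What the external peer actually receives after edge processing."""
    return {k: v for k, v in update.items() if k not in INTERNAL_ONLY}

def is_gratuitous(last_sent_external, new_update):
    # A smarter implementation would suppress the send when nothing the
    # external peer can observe has changed.
    return externalize(new_update) == last_sent_external

before = externalize({"prefix": "192.0.2.0/24", "as_path": [64500],
                      "next_hop": "10.0.0.1", "cluster_list": ["rr1"]})
# The blue RR went away: new best path, new internal next hop and cluster list.
after = {"prefix": "192.0.2.0/24", "as_path": [64500],
         "next_hop": "10.0.0.2", "cluster_list": ["rr2"]}
print(is_gratuitous(before, after))  # the external peer sees an exact duplicate
```

The duplicate still triggers flap dampening and policy evaluation at the receiver, which is the cost being described.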

Greg: Duplicate paths or prefixes?

SPEAKER: It's an exact copy of the update. From the perspective of the BGP peer it's an exact copy: the attributes that changed, that caused a new BGP path selection to happen, were stripped off before you advertised it externally, but the implementation is not smart enough to say, I had better not send this. There are some things from a BGP perspective where this all happened inside your network; for example, if the contents of the cluster list change, a route reflector might still send a new copy of an update as a result. So these are inefficiencies that can be optimised in implementations or in the protocol.

So, the impact of updates. This is one of the things I was talking about earlier: a CDF, and during the busy times, if you look at the chart here, all the red is duplicates, and the line you can't see there is the unique updates. It's pretty pathetic from a systemic performance perspective if you are a protocol designer; we didn't do a great job with this in the real routing system. This is real data from Route Views coupled with Level 3's iBGP topology, combining the two together to figure out why we got these duplicate updates. I don't have the chart here, but the main reasons were next hop changes and cluster list changes that result in these sorts of instabilities.

One of the things that really frustrated me, and I experienced this when I was at Qwest: we had Cisco core routers and Juniper edge routers, and I was doing this exercise where I was figuring out how many paths were on each router at each point of the network, and a lot of our routers were seeing extra paths and I didn't understand why. It turns out that from RFC 1966 to RFC 2796 there was a change. When route reflection was initially specified, one of the things that happened is that a client of a route reflector didn't have to know it was a client, so that people could deploy route reflection backwards compatible with iBGP speakers that didn't understand route reflection: they didn't know what an originator ID or a cluster list was, and they didn't know to poison based on that path vector. Anyway, basically the rule was: if I am a route reflector and a client tells me something, I tell all my iBGP peers and my clients, but I can't tell the client that told me, because you would loop routing information back and that router might install it. What changed in RFC 2796 is that it says, let's make an exception here, because it's ten years after route reflection was specified: if a client tells me something and I am a route reflector, I can reflect that back to the client and everyone else, and expect the client to poison it; the client should drop it based on the fact that the originator ID equals its router ID. It doesn't seem like a big deal, so why do that? It's an implementation optimisation: if I generate the update one time and copy it across my peers, I have one instruction set; if I don't do that, and generate it per peer and have to exclude that client, then I have got N instruction sets per peer, so it's much more work.
The problem is that doesn't consider things like systemic state, and let me actually illustrate that for you here.

So here is a high-level topological picture: you have got a prefix, P/24. Each of those route reflectors tells its iBGP peers and clients, but they also tell the originating client back, just from a design perspective, even though he is never going to use this information. The only reason he should ever receive anything back that he advertised is an implicit withdrawal, because the route the route reflector advertised was a new best path. So all of these get dropped on input; the route reflectors should really never reflect that information back. One of the problems, though, and it didn't seem like a big deal: think about how this behaves in a real network. Take that iBGP picture for a minute. This prefix is advertised, it's processed, the route reflectors get it and they all reflect it back. But here is what happens: all iBGP implementations today that are in production service update processing in a round-robin fashion per peer. So the client advertises the route out to the route reflectors and they advertise it back. Assume this client we are looking at has got 100,000 prefixes. He starts getting updates in, he gets ten in, and he starts round-robin advertising his updates. The route reflectors reflect them back, they get placed in this input processing queue, and the reflected updates coming back get ahead of some of the production updates coming from EBGP in the first place. That is a horrible thing: you have got this stuff you are going to discard, and it's being reflected back to you and processed before production updates in the control path. When you look at it from this perspective it's pretty ugly, and what is unfortunate is you don't see this when you show iBGP routes or anything; you don't see these because they are all being discarded, and so it's horribly inefficient.
So when you are processing these really busy updates, a lot of what you are seeing is duplicates anyway, and you can't converge traffic because you are processing this garbage that you are going to throw away. It's really inefficient.
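The reflect-back rule described above can be sketched minimally. This is an illustration of the RFC 2796 behaviour as the talk characterises it, with invented names: the route reflector sends one identical copy to every peer, including the originator, and the originator burns input-queue work discarding its own route.

```python
# Sketch of RFC 2796-style reflect-back and originator-ID poisoning.
def reflect(rr_peers, update):
    # One identical copy per peer: cheap for the RR (one instruction set)...
    return {peer: dict(update) for peer in rr_peers}

def accept_on_input(router_id, update):
    # ...but the originating client drops its own route on input,
    # after it has already consumed a slot in the processing queue.
    return update["originator_id"] != router_id

update = {"prefix": "192.0.2.0/24", "originator_id": "10.0.0.7"}
copies = reflect(["10.0.0.7", "10.0.0.8", "10.0.0.9"], update)
for peer, upd in copies.items():
    print(peer, "accepts" if accept_on_input(peer, upd) else "discards (still queued first)")
```

The waste the talk points at is exactly the discarded copy: it is queued, parsed and policy-checked ahead of production EBGP updates before being thrown away.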

So I gave you this picture. One of the things that is really interesting is that, for RFC 2547 VPNs, a lot of people want to bridge information between VRFs on the same PE router, and the way to do that is to configure a policy that says: if I get a route in on this VRF, share it with these other VRFs on this router. One of the things someone wanted to do was avoid having to configure that policy: announce the routes to the route reflector and, because they are reflected back, accept and process them anyway, even if the cluster list or originator ID matches the local router ID. So we are building on this bad behaviour to get around a local configuration issue in the first place. If the only thing reflected back from the route reflectors were the prefixes that needed this, that would be fine, but what is going to happen is that people who implemented route reflection correctly in the first place are going to change their route reflection behaviour and reflect everything back. We are going to force implementations to do even dumber things than they are doing today.

So, some of the network architectural considerations: cluster ID. I don't know how many people in here use route reflection; I have used it in three or four different networks and I have been involved with a bunch of operators that use it, and one of the things I always thought was that most people used cluster IDs per cluster. If you go back, those three would have the same cluster ID; but you don't have to do that, and as a matter of fact the cluster ID will default, and a lot of people just let it default and let the information get exchanged between the routers. One of the problems with this, and here is that diagram we had earlier: these guys are advertising routes they learn from their client down here to their iBGP peers, back to the client they learned them from, but they are also advertising these exact same prefixes to each other, and they are never going to use those prefixes. It doesn't seem like a big deal, but look at it again from the number of paths, a systemic, perspective: if I get 100,000 routes from this AS down here, an external peer, and each one of those is advertised between the route reflectors, that is 200,000 extra adj-RIB entries I am going to have to carry on each of those route reflectors as a result. So it has a big impact on memory, utilisation and churn, and you are never going to use it. Common cluster IDs are something you should employ.
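The cluster-ID point above reduces to a loop-prevention check plus some arithmetic. This sketch assumes the standard rule (a route reflector rejects a route whose CLUSTER_LIST already contains its own cluster ID) and uses the talk's own numbers; the function names are invented.

```python
# With a shared per-cluster ID, the peer RR's reflected copy is rejected
# on input because its own cluster ID is already in the CLUSTER_LIST.
def rr_accepts(my_cluster_id, cluster_list):
    return my_cluster_id not in cluster_list

def extra_adj_rib_entries(external_routes, rrs_per_cluster, shared_cluster_id):
    """Extra never-used adj-RIB-in entries each RR carries from its peer RRs."""
    if shared_cluster_id:
        return 0
    return external_routes * (rrs_per_cluster - 1)

# Defaulted (unique) cluster IDs: copies between RRs are kept.
print(rr_accepts("rr2-router-id", ["rr1-router-id"]))   # accepted
# Common per-cluster ID: the copy is dropped on input.
print(rr_accepts("cluster1", ["cluster1"]))             # rejected
# The talk's example: 100,000 customer routes, 3 RRs in the cluster.
print(extra_adj_rib_entries(100_000, 3, shared_cluster_id=False))
```

With unique IDs the three RRs each carry two extra copies of every customer route; with a common cluster ID that state never exists.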

What else? There is actually lots of other stuff. Placement of peers versus customers is a big thing: where you have reachability and for what prefixes. For example, within a cluster those route reflectors are only going to advertise out the single best route, but it's going to be multiplied by the number of route reflectors, so things like that matter. One of the best ways to minimise the number of paths is the number of route reflectors you use per cluster. There are considerations if your route reflectors aren't congruent with the physical topology, or if a set of route reflectors happens to peer through a client of one of those: you can end up with forwarding problems, so you have got to be pretty particular with that. One of the other things: of those 2 to 6 million paths you see today, your core routers probably have a lot more paths on them than even your edge routers, and when you start adding things like other address families, IPv6 or VPNs, or using flow spec over iBGP, all of these share the same control plane, so all these updates that are being reflected back are put in front of those other network services as well. Just something else to keep in mind.

Another big thing: I see lots of people propose new communities all the time, or some new attribute like advanced IGP metrics, and each new attribute transits your network. For example, one of the things I did was rewrite the MEDs I received on ingress; I didn't use MEDs because MEDs are broken most of the time, and I rewrote the communities. If you don't do that in your network, you are going to have more unique paths. If you look at attribute growth, this is the Level 3 network over the last ten years: every new attribute means you can't do things like iBGP update packing, so you have to send a unique update for every prefix and you can't pack them. All of this has scalability implications for your routing system.
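Update packing, mentioned above, is just grouping prefixes that share an identical attribute set into one UPDATE message; this sketch shows how a per-prefix attribute (a rewritten MED, a unique community) destroys the grouping. The function is illustrative, not a wire-format implementation.

```python
from collections import defaultdict

def pack_updates(routes):
    """Group prefixes by their attribute set: one UPDATE message per group."""
    groups = defaultdict(list)
    for prefix, attrs in routes:
        # An UPDATE carries one attribute set plus the NLRI that share it.
        groups[tuple(sorted(attrs.items()))].append(prefix)
    return list(groups.values())

# Uniform attributes (e.g. ingress-rewritten community): everything packs.
uniform = [("10.0.%d.0/24" % i, {"community": "64500:1"}) for i in range(4)]
print(len(pack_updates(uniform)))   # one UPDATE for all four prefixes

# A distinguishing per-prefix attribute: no packing, one UPDATE each.
tagged = [("10.0.%d.0/24" % i, {"community": "64500:%d" % i}) for i in range(4)]
print(len(pack_updates(tagged)))    # four UPDATEs
```

This is the mechanism behind the talk's advice to normalise MEDs and communities on ingress: uniform attributes mean fewer, larger UPDATEs.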

Routing security. One of the other things: each feasible path, in other words each path that makes it into an adj-RIB-in, would have to be validated in some manner: can I accept this route, and from this peer? Each path also means that if you are going to use BCP 38 or uRPF for anti-spoofing, each one of those 2 to 6 million paths has to be a feasible path, and some policy has to be applied in the forwarding path; the more paths you have, the more scalability considerations you are going to have there.
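A minimal sketch of the feasible-path uRPF check described above, with invented names and a dictionary standing in for forwarding-plane state: the key point is that every feasible path becomes state the forwarding plane must hold and consult per packet.

```python
# Feasible-path uRPF sketch: a packet is forwarded only if its source
# prefix has a feasible path via the arrival interface.
def urpf_feasible(feasible_paths, source_prefix, in_interface):
    """feasible_paths: {prefix: set of interfaces with a feasible path}."""
    return in_interface in feasible_paths.get(source_prefix, set())

# Two feasible paths for this prefix means two entries of forwarding state;
# multiply by millions of paths and the scaling concern is clear.
paths = {"192.0.2.0/24": {"ge-0/0/0", "ge-0/0/1"}}
print(urpf_feasible(paths, "192.0.2.0/24", "ge-0/0/0"))  # legitimate arrival
print(urpf_feasible(paths, "192.0.2.0/24", "ge-0/0/2"))  # spoofed or misrouted
```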

Additional IDR work. One of the ironies: there are certainly protocol changes that could be made for some of this stuff, but most of the work going on in IDR is actually aimed at adding more paths or adding new attributes. Nothing is really being done to minimise the number of existing attributes. So it's something to keep in mind: these are huge scalability considerations, and a lot of the stuff floating around the network is not benign; it's being processed in front of your production updates.

What else matters here? The number of unique prefixes in the routing system is what your FIB size is going to be, and that is fine, but the number of paths means you are going to have that much more FIB I/O and everything else you saw in the first diagram. So it's certainly something that you need to consider on the network design and architecture side of the house.

That is pretty much it, actually. We are publishing a couple of different papers around some of this to qualify and quantify it in real networks; one of them is going to be at IMC, which UCLA has worked on, and lots of folks reviewed some of the earlier stuff I did on this. There are iBGP optimisations in implementations, and ways you can architect your network to minimise the number of paths, or expand it, depending on what you want to accomplish. I know I went pretty quickly. There are a bunch more slides in the slide deck that is posted online.

Greg: You mentioned there at the end attributes being added to iBGP, and we already have implementation issues and protocol design issues that are affecting the scalability and really the functionality of the Internet at large. When these were first proposed there were concerns about operational stability, and they were kind of bolted on to the side. The same discussion happened with ‑‑ that institutional memory seems to have been lost. And everybody wants to throw stuff onto iBGP; we need to step back and talk about stabilising what we have and improving it. Why isn't that dialogue taking place in the IETF?

SPEAKER: It definitely needs to; that is why I started working on this, to try and inform some of that. There are a lot of implications, like the implosion inside networks. The other thing, and I don't mean to marginalise it in any way, is all the new address families. I am not against some of that stuff; if it's not done in iBGP it's going to be done somewhere, and the state machine is going to be the same, so it's sort of a balancing act. But yes, this is something that people need to give more consideration to, because it does have a huge impact on the scalability of the system.

AUDIENCE SPEAKER: Google. Can you give some description of which specific implementations you have observed with this kind of reflection behaviour?

SPEAKER: With JUNOS and IOS. And there are some improvements; we worked with folks like Patel at Cisco, and there are some really smart things you can do to eliminate 95 percent of those in the implementation. It's not a lot of work: you have to add another index and keep track of a couple of things. For production JUNOS I do have version numbers if you want to see them; I am happy to hook you up with that.


AUDIENCE SPEAKER: I am not asking a question, I am asking are there any more questions for Danny? If not, thank you very much for the presentation.


CHAIR: Greg, you are up next.

GREG SHEPHERD: I know I have got two to three hours' worth of material and I had to cut that down to 30 minutes. Shout, call me names, whatever you need. This really is looking at what we expected versus where we are, some objectives that I still have, and trying to see how the rest of the world is coming together. I wouldn't necessarily say I have got answers; lots of questions and projections of where I think the problem space is, and maybe where some of the solution space is targeting.

So IPTV has taken off; it's kind of exciting. About five years ago it suddenly became an operational requirement and dusted the old guys out of the closet. That was exciting. Really, what happened flew in the face of the initial expectations when multicast was first being rolled out back in the late 80s and early 90s: there was an expectation of global connectivity. We didn't have firewalls; we had end-to-end connectivity in services, and multicast was along with that, and we would have this global video broadcast network. That never happened. What we really have is these walled gardens of edge providers rolling out IPTV services; they want to maintain control of their customer base and be the provider of all services to them. So the content has become very regional, in the same way the traditional TV networks are very regional, which is completely opposite to all other data. Right now when I open Google News, I get news links from around the world and read perspectives on US events. But the video space is still frozen that way. Some of these IPTV edge roll-outs are preserving that, and a lot of it, in the US specifically, has to do with long-term contracts with affiliates and this idea that they draw a circle on a map around their antenna and own that population of people. That is starting to age out and decay a little bit, and technology, different relationships and leverage are getting in place to open it up. But because the content owners control the cost of ownership and distribution, and all the relationships controlling the competition in this space, it's preventing this opportunity to do things over the top. As these contracts start to fade, the question is: will that last, and if not, as it erodes, are there opportunities for new services to wedge in there and change the market?

Now, this was taken from a customer graph given in a presentation some years ago; they said I couldn't use the data, so I recreated the relationships without real values, because what we are talking about here is relationships. On the X axis we have got time; Y would be bit rate. So take any arbitrary unit of time: we have got three basic types of traffic coming into your home, the edge network on the receiver side. VoIP increases a little bit; even if everyone in the home was on the phone on different calls, it wouldn't be much. Video ramped up fairly quickly, but set aside the outliers: DirecTV has got a premium sports package, a wall full of tuners, every TV tuned to a different show; that is an outlier extreme. The average home, two or three TVs max, is going to ramp up and kind of cap at some point; even with VoD and queuing on your DVR it's bounded, let's say. It ramps up and starts tapering off. The competition amongst the providers is around bandwidth itself, so the money is being thrown in to compete against the other edge networks, and they are creating this opportunity for other services outside of what they think is their current customer base.

So right now IPTV service is really a value-added service, but if you look at it, it's no different from what they got from rabbit ears when it was all analogue. The content hasn't changed much, and with access bandwidth growing, effectively unbounded, it's opening up opportunities for over-the-top video solutions. We don't see any market leader right now, but we do see the cracks showing. We see Move Networks selling their service; you don't see their name up front like YouTube, but they are back there providing these over-the-top video services. We have got BitTorrent. And then some work happening that I will talk about a little here is AMT, an opportunity to bridge those networks that have multicast back ends out to the unicast-only networks. As the bandwidth goes up it makes these services even more functional, and as they become more functional they start challenging the IPTV viewing we currently have. There are going to be some hardware issues: sitting in front of the TV is not the same as sitting in front of your computer, but I don't know what you are like at home; I have got a Mac mini sitting next to my TV, and we can sit around the television to watch content that comes over IP. The DVRs are changing; for those from Europe, you don't have TiVo here, but the idea is you are no longer dependent on time synchronisation: you come home from wherever you were and turn on the box, and the content is pre-queued for you. You don't care how it got there; it could have come across your cable provider or a BitTorrent client. My parents' VCR still flashes 12; if a technology requires them to get off their butts and change something, it's not going to happen. They don't care how that data got there. It's funny: in the multicast world, that is OK for the geeks who are playing with the technology, but the guys who want the content don't give a rat's backside what the transport is; they want to get the bits there.

So now, this is all great for non-live content; it makes a lot of sense, and even for some live events. For example, I am not a big pointed-ball fan, but I do watch the Super Bowl and the playoffs when they happen. I don't watch it live; I am more than happy to work in my shop, queue it up, come back and fast-forward through the commercials. But in a lot of markets live TV is going to be very critical. One thing that came up during the World Cup event in Germany a few years back: there was a German website comparing all the different IPTV providers by latency, because what people didn't want was to have all their friends over to watch the match and hear their neighbours scream "goal" while they were still watching the ball cross midfield. We don't have the solid technology to address that on a global scale yet.

So what is the end game? I would like to go back to this idea of ubiquity: if I want to get a handful of different channels, I shouldn't have to be graced by my local provider telling me whether I can or can't. I can read a New Zealand paper from Auckland without getting authorisation from my local paper. Why do we have that same controlling factor in video? It shouldn't be the case.

The trouble is we have these multicast-connected networks at the edge working really well, but very little global multicast peering: still between 6 and 10K prefixes. Compare that to what Danny was showing, in the hundreds of thousands; we don't have the kind of connectivity to really make that critical mass go forward. Multicast is a known, proven solution, and we have business models built around the edge networks right now. We just need this global live content piece to happen; otherwise we are all forced to use unicast and the whole scaling model falls apart. If you are used to injecting content into this multicast infrastructure, then to get it out globally you will be forced to go to one-to-one streams again.

So the problem, and this is again my soapbox: multicast has this all-or-nothing problem, everybody has to have it turned on or you can't get there. We have seen failed business models in the past that had the idea but didn't have the control. If you give me enable on all your routers we can make it happen; I doubt that is going to happen today. There are too many people in the food chain to motivate that process. You have got the people who own the content; the people they are buying transit services from, which could be Tier 1s at the core; edge networks and various layers of them; as well as vendors making the products at the edge that may not support this. Getting all of that to come together has been insurmountable, so getting this to happen on a global scale hasn't worked, and in the past we have been forced to use unicast.

Now this is my little pitch. AMT is something I see as an opportunity to really bridge that gap, and I can bore you with a little bit of history. Go back to the early days of multicast: DVMRP. The idea was to make basically a broadcast domain on top of the first layer 3 network at Stanford, and in order to do that, they knew they didn't have this feature enabled on all the boxes, so it included RPF checking with prefix advertisement and it also had tunnelling. As best I can tell -- and if someone can dig back deeper, I would love it -- the first instance of encapsulation for tunnelling I found in any RFC was back in '88/'89. It's funny: as multicast progressed and we moved to Protocol Independent Multicast and used the local RIB, we threw out encapsulation and tunnelling. We assumed everyone would go native. It didn't happen. So what pieces did we potentially not deliver? What I am proposing is that IGMP should have been multi-hopped from the beginning. If we had a PIM domain that we knew was distributing the trees, IGMP could have found the boundary and got that content back over unicast. It would have been DVMRP at the time; what we have today is much more lightweight. AMT has an anycast address, finds the PIM boundary, and replicates on behalf of the receivers. Because it's using an anycast address to detect the boundary, this can be deployed in various locations to increase the radius of deployment. The interest we have seen so far, to step back a little bit, is in cases where the content owner and the provider are the same entity, so you don't have to try to put these people together.
Edge devices in some parts of the network don't replicate well. These providers also have content they want to get to a global audience, and they are looking at running AMT relay services at the edge of their networks where they have legacy equipment. It looks like unicast to the rest of the world, but within their own infrastructure, where they are the content owners as well, they get all the multicast efficiency for injection without having to re-inject all the content or work around the limitations of the legacy equipment at the edge. They continue to push that out, in some cases looking at placing the relay right next to a DSLAM of some kind.
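The gateway/relay exchange described above can be sketched in code. This is a minimal conceptual model of the AMT handshake, not a protocol implementation: the class names, addresses (documentation ranges) and in-process method calls are all illustrative stand-ins for real UDP messages.

```python
# Conceptual sketch of the AMT idea from the talk: a gateway on a
# unicast-only network discovers a relay via anycast, sends its IGMP-style
# membership over unicast, and the relay replicates native multicast to it.
# All names and addresses here are illustrative placeholders.

class Relay:
    """A unicast-reachable relay sitting at the multicast/unicast boundary."""
    def __init__(self, unicast_addr):
        self.unicast_addr = unicast_addr
        self.subscriptions = {}   # group -> set of gateway addresses

    def handle_discovery(self):
        # Relay advertisement: return our unicast address so the rest of
        # the exchange is pinned to one relay even if anycast routing shifts.
        return self.unicast_addr

    def handle_membership_update(self, gateway_addr, group):
        # The gateway's membership report, carried over unicast;
        # the relay joins the multicast tree on the gateway's behalf.
        self.subscriptions.setdefault(group, set()).add(gateway_addr)

    def replicate(self, group, payload):
        # Native multicast arrives here; replicate to each tunnelled gateway.
        return {gw: payload for gw in self.subscriptions.get(group, set())}

class Gateway:
    """Runs where native multicast is unavailable (e.g. a handset or PC)."""
    def __init__(self, addr):
        self.addr = addr

    def join(self, relay, group):
        relay_addr = relay.handle_discovery()   # sent to the anycast address in reality
        relay.handle_membership_update(self.addr, group)
        return relay_addr

relay = Relay("198.51.100.7")
gw = Gateway("203.0.113.9")
gw.join(relay, group="232.1.1.1")
delivered = relay.replicate("232.1.1.1", b"video chunk")
print(delivered)  # {'203.0.113.9': b'video chunk'}
```

The point of the anycast step is exactly what the talk notes: relays can be dropped in at many locations, and each gateway simply lands on the nearest one.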

We have got a test implementation in NX-OS; the reason it is there is because Dino is there and he is a multicast guy, so we ramped this code up from some early work, and we have it in a one-RU box, just a PC in a rack that we use for deployment testing currently. UT Dallas has written a public implementation of the gateway and relay that runs on Linux, and on the gateway side alone there is support for the Mac and Windows. More recently, I am working with an engineer who is developing gateway services for Android, to go across their current multicast infrastructure and still get the content over to the handsets.

So we have got provider testing taking place, as I alluded to with that diagram. There is a box at LINX right now that is also advertising the global anycast prefix, so content that I have got in Palo Alto can get to its receivers as well. The PAIX box is part of the ISC.org network; I work for the land of misfit toys, and I am fairly misfit, so it worked out all right. Netnod has got a box; he is not here, it went down some time ago and I am trying to get him to get his stuff back up. There are some multicast boxes in the path too. We are always looking for more participation. Right now the testing is taking place primarily with this one customer, because they have got a service they want to roll out within the next 12 to 18 months, and the Android stuff is taking a lot of time. Once it's available I think it's going to be open source for everyone to play with.

Now, a reality check. I have stepped back and talked about best effort, because I am talking about this broadcast infrastructure, and multicast in particular is UDP. Once it leaves your administrative control you don't have any feedback, really, on how successfully it was received by the eyeballs. In an IPTV deployment at the edge, they own the network end to end, set up a QoS policy and separate their traffic to ensure the content gets there. If you are doing this across the global Internet, how do you know it's going to get there? Is the quality of the Internet ready for this global distribution? This next part comes from Colin Perkins; I was trying to cut it down, and even if I don't walk through it all directly, you can review it on your own and harass Colin. He is looking at doing exactly this, and began initially with just UDP video-like and VoIP-like distribution across the Internet at large, with a receiver device or code that just measures loss. The hope is to look at standard loss characteristics, and if you can quantify them in some way, you know what you can do to your flows to protect them, whether it's a lightweight FEC or something more complicated and sophisticated. So it's basically a global infrastructure like this: he has got two different edge networks, cable and DSL, and then it's crossing multiple infrastructures. It's UDP; there is his box and there are his flows. He has got CBR flows, three of them: 1, 2 and 4 megabits -- this is standard definition -- and some VoIP-like stuff as well, CBR at 64 K. One of these is not there: the cable modem in the cable network where he had the receiver didn't have the capability of receiving 4 megs. He is currently looking for more people to participate; you can download his Linux code or just put one of these embedded boxes in place.
The nature of the beast is that he is doing bursts, ten-minute bursts. If you look at what is happening in DVB and even the IPTV forums of various flavours, they are trying to quantify acceptable errors. We look at it as network operators in terms of time, and time is not relative to a viewer's experience -- "hey, there was a ten-millisecond gap here". They won't talk in terms of percentages either. What they want to know is: did the viewer turn the channel because the viewing experience was so crappy, or stay there? It's more like MTBF: how many visible artifacts per viewing element of time. That was at two hours originally; they broke it down to one hour as the unit, so artifacts per hour is the metric they are trying to look at. If we are doing ten-minute bursts it's hard to gather that directly, but you can extrapolate in some way; if you are talking about loss patterns and a very low probability of a particular loss pattern, you are not going to find it in an hour interval, but there is some data to extrapolate from. Red is ADSL and green is the cable space for the 1-meg streams. It looks like ADSL was a little bit worse, but the packet loss percentages are still very, very low. When you are talking about percentages down below 8 percent like that, a low-overhead FEC can handle it, even COP3 for a random distribution. We need to look deeper inside what the loss patterns are to really make that determination.
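The distinction between a raw loss percentage and the loss *pattern* the talk keeps returning to can be made concrete with a small helper. This is a hedged sketch, not Colin's actual analysis code: given the sequence numbers that arrived from a CBR test flow, it builds a histogram of consecutive-loss burst lengths, which is the kind of data the following slides discuss.

```python
# Sketch: turn received sequence numbers from a CBR test flow into a
# histogram of loss-burst lengths, so random single-packet loss can be
# distinguished from the correlated bursts discussed in the talk.

def loss_bursts(received_seqs, total_sent):
    """Return {burst_length: count} of consecutive lost packets,
    given the sequence numbers that arrived out of total_sent."""
    got = set(received_seqs)
    hist, run = {}, 0
    for seq in range(total_sent):
        if seq in got:
            if run:                      # a loss run just ended
                hist[run] = hist.get(run, 0) + 1
            run = 0
        else:
            run += 1                     # still inside a loss run
    if run:                              # trailing loss run
        hist[run] = hist.get(run, 0) + 1
    return hist

# 10 packets sent; 3 and 4 lost together, 8 lost alone.
print(loss_bursts([0, 1, 2, 5, 6, 7, 9], 10))  # {2: 1, 1: 1}
```

A truly random channel would show almost all mass at burst length 1; the measurements described next show substantial mass at longer bursts.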

So this is packet loss duration and frequency, and I have asked him to turn this around as an MTBF, so I have got a slide later where we will try to make sense of it. This is correlated loss by number of packets. A truly random distribution would be single lost packets, but in an IP network that is not the case: you are highly likely to lose the adjacent packet as well, the events overlap -- it's not packet based, it's time based -- so your correlation is a little bit higher. You can see single-packet losses were pretty high, two in a row still up there, and longer bursts tended to fade out, but we were still getting bursts as long as 9 packets in a row, and that is in the 4-meg space. Doing this in my head at 3 milliseconds per packet, we are looking at under 30 milliseconds of loss right there at 9 packets. If you look at the DVB analysis of COP3 versus Raptor, I think it was roughly an 8-millisecond boundary, which seems awfully small -- I would have thought 10, but I have to go by their data, and it said 8. If that is the case, we are looking at nearly 30 milliseconds of loss, distributed randomly across ten-minute intervals, and at least one occurred. If one occurs every ten minutes, and it's a loss that you can't compensate for with COP3, that means 6 impairments per hour that you can't repair with COP3. It's worse than what Colin is extrapolating here. Here is just a great geek picture. It shows you most of the packets come without interruption, right, but most of the good runs are very short. So you may have a long burst with nothing else around it, but most of the time it's been chopped up quite a bit. Reordering really happened too. This was, again, geek stuff: dispersion tends to focus around areas of congestion, so during peak times dispersion was up, and during quiescent times it was down. It's plotted over time of day, so you can see that early in the morning there was very little dispersion, but by evening dispersion goes way up, and it's cyclic over 24 hours.
In the cable space it was actually extreme in both directions: even less dispersion during low use and even more during high use. Part of this is subjective; I have got my own objectives, and that is to understand what we need to do to the transport to protect the content. He is looking at this as a way to learn about the network from an operator's perspective, so this may be more interesting to him. Less dispersion at night, more in the day, repeated in 24-hour cycles, so this is a daily rate.

This is the one I wanted to get to. This looks more like MTBF. The 8-packet burst, which was roughly 20 milliseconds, is the triangle, and if I can find them, right there past 4/11 there are ten seconds between events. It's not happening all the time, but you are going to exceed that impairments-per-one-hour budget. Without either a high-rate FEC, which is not free, or two-dimensional matrices, we are not going to be able to compensate for this 20 milliseconds.
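The back-of-the-envelope numbers in these two slides are easy to check. This sketch only reproduces the arithmetic from the talk (packet spacing at a given CBR rate, burst duration, and the extrapolation from events per test interval to impairments per hour); the 1500-byte packet size is my assumption.

```python
# Arithmetic from the talk: at 4 Mbit/s a packet goes out roughly every
# 3 ms (assuming ~1500-byte packets), so a 9-packet burst is under 30 ms
# of loss, and one unrepairable event per ten-minute interval
# extrapolates to 6 impairments per hour.

def packet_spacing_ms(rate_bps, pkt_bytes=1500):
    """Inter-packet gap of a CBR flow, in milliseconds."""
    return pkt_bytes * 8 / rate_bps * 1000

def burst_ms(burst_pkts, rate_bps):
    """Duration of a burst of consecutive lost packets."""
    return burst_pkts * packet_spacing_ms(rate_bps)

def impairments_per_hour(events_per_interval, interval_minutes):
    """Extrapolate per-test-interval events to an artifacts-per-hour metric."""
    return events_per_interval * 60 / interval_minutes

print(packet_spacing_ms(4_000_000))        # 3.0 ms per packet
print(burst_ms(9, 4_000_000))              # 27.0 ms for a 9-packet burst
print(impairments_per_hour(1, 10))         # 6.0 impairments per hour
```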

So, to summarise: he is saying lightweight FEC can handle it, but remember it's not random. Truly random would be single-packet losses; you are more likely to lose adjacent packets as well. And then of course there is what the DVB guys are looking at: rain is a real issue, AC ducts turn on, feeds go past power systems, and when these periodic events take place they take a big chunk of time out. We haven't even talked about operational issues. Danny talked about churn, and that is loss of connectivity; we are doing a lot of testing of multicast convergence times and IP convergence times, and we know it can be tens of milliseconds, hundreds, or multiple seconds before you get a new route to the secondary path.
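Why short bursts defeat a lightweight interleaved FEC like COP3 can be shown with a one-line model. This is a simplification, assuming a COP3-style scheme that adds one parity packet per column of an L-wide interleaving matrix, so each column can repair at most one loss; the exact parameters of real deployments vary.

```python
# Hedged model of 1-D column FEC (COP3-style): packets fill an L-column
# matrix row by row and each column gets one parity packet, so each column
# can repair exactly one loss. A burst of b consecutive packets lands in b
# distinct columns when b <= L, hence bursts longer than L are unrecoverable.

def recoverable(burst_len, L):
    """True if a single burst of burst_len consecutive packets can be
    repaired by one-parity-per-column FEC with L columns."""
    return burst_len <= L

print(recoverable(9, L=10))   # True: wide matrix absorbs the 9-packet burst
print(recoverable(9, L=5))    # False: two losses land in the same column
```

This is the trade-off the talk points at: widening L (or going to a 2-D matrix) buys burst tolerance, but at the cost of more parity overhead and more repair latency.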

So, most loss is random to a point, but we know there are large correlated losses. We didn't see the large operational ones, because the ten-minute test intervals don't, I think, give a big enough window, but even within those intervals we saw enough burst loss, at least for me, to know that COP3 may not be enough while still compensating for the random loss. This was an old slide and we have a little bit of that data right now, and the lightweight FECs implemented can do most of it. But how do we correct for the large losses? Some of this has happened on the video side. It was fun -- I am at Cisco, I have been around the block a little bit; I was there some years ago with the early multicast development, took about a six-year competitive-analysis sabbatical, and now I am back at Cisco again, and it was exciting to come back.

SPEAKER: When Cisco acquired Scientific Atlanta, the IP guys knew very little about video, and now we are on the same team. I have learned a lot from these guys and maybe they have learned a little bit from me. We are seeing, in the video space, that they are trying to compensate for what they are learning about IP networks. One solution is MDC, multiple description coding, but they are really still focused around this idea that you control the network: they are breaking the coded video up into various layers and then expecting those to be routed along various paths around the network, so if you have a loss in one segment you can still recover; we are seeing that on the operator side on IPTV-like networks as well. It is all targeted at the idea that if I lose a fraction of the data, I can recreate it, at least approximately, from what I do have. In the same way that MPEG works on a temporal scale, they are doing this on a spatial scale; they are not exploiting the temporal side directly, this is just spatial. So I am thinking of ways to exploit both temporal and spatial redundancy on the encoding side, without expectations that the network is going to provide you path diversity or parallel feeds. One step before that is SVC, which is looking at multiple bit-rate receivers. Again, if I have got an edge network and I have certain available bandwidth, I can encode for the receiver bandwidth that I am serving, but if I am trying to get to a global audience, I don't know what the receiver population is going to be like. I could have handsets, set-top boxes, PCs on campus with lots of bandwidth, and guys still stuck out in the country like me. SVC is an attempt to provide a layering mechanism so you can adapt dynamically to the receiver's bandwidth. The trouble is that it actually, I believe, makes the content and the encoding more fragile. What you have is a base layer and enhancement layers, and the base layer is the base, like an I-frame in MPEG-2: without it your enhancement layers are worthless. If you have lost the base layer, it's gone.
The base layer is the minimum receive bandwidth, and ultimately it's not as efficient as you would need; it seems that in current encodings they will do it in blocks of 2 to 3 bit-rate ranges. So, to try to put all of this together, what we have envisioned, working with the Scientific Atlanta folks at Cisco, is multi-lattice video encoding, which takes temporal and spatial redundancy into consideration and layers across both, without having to depend on multiple paths or path diversity of any kind, and without precedence between layers. If you truly lay this out with multi-lattice encoding there is no base-layer dependency; each lattice is independently encodable, and because you are transferring this spatial relationship into the temporal domain, you transform any unrecoverable short-duration error, be it 10, 50 or 500 milliseconds, into a recoverable, concealable long-term error. And if you encode it cleverly, taking into consideration what MPEG already knows, you can ensure that the artifacts are concealable. There are certain things the human eye detects and certain things it completely misses -- we have all seen the blind-spot eye test and such -- and there are lots of things our concealment can take into consideration. Vertical and horizontal lines the human eye picks up on, whereas diagonals you don't. If you account for that in the lattice layout in your encoding, you can ensure that even when you have got 25 to 50 percent loss, the loss occurs in patterns that are easily concealable and undetected by the human eye.
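The core spatial idea -- interleave samples into independent lattices so that losing one lattice leaves neighbouring samples to conceal the gap -- can be shown with a toy one-dimensional example. This is only my illustration of the principle: real multi-lattice coding works on 2-D frames inside an MPEG codec, and the simple averaging here stands in for proper concealment.

```python
# Toy sketch of the lattice idea: split a "frame" (here a 1-D list of
# sample values) into interleaved descriptions; if one description is
# lost, its samples are concealed by interpolating the surviving
# neighbours. Real schemes do this on 2-D pixel lattices inside a codec.

def split(frame, n=2):
    """Interleave samples into n independent descriptions."""
    return [frame[i::n] for i in range(n)]

def conceal(descriptions, lost, n=2):
    """Rebuild the frame, interpolating the lost description's samples."""
    length = sum(len(d) for d in descriptions)
    out = [0] * length
    for i, d in enumerate(descriptions):
        if i != lost:
            out[i::n] = d                       # place surviving samples
    for j in range(lost, length, n):            # fill each missing sample
        neighbours = [out[k] for k in (j - 1, j + 1) if 0 <= k < length]
        out[j] = sum(neighbours) // len(neighbours)
    return out

frame = [10, 12, 14, 16, 18, 20]
descriptions = split(frame)        # [[10, 14, 18], [12, 16, 20]]
print(conceal(descriptions, lost=1))   # [10, 12, 14, 16, 18, 18]
```

Even with half the samples gone, the reconstruction stays close to the original -- the loss pattern has been arranged so concealment works, which is the point the talk makes about choosing lattice layouts the eye forgives.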

So we have seen practical improvements of 500 milliseconds or more in start-up latency. That wasn't the initial intention, but we have derived lots of other benefits from this as well: fast channel change, start-up latency reduction, and, by looking at the order of the frames between the lattices and within a GOP, making the transport itself even more resilient. You separate same-time GOPs by even larger gaps so that when you have got big block losses of any kind, you reduce the percentage of fidelity lost to any single event.

So now, with all that excitement: the Internet is dead. If you think back on the IETF, we have done tons of work on robust transport protocols. We have got RTP, primarily in the UDP space, and this is great. But -- and I worked with a couple of start-ups that were playing with this kind of video solution ten-plus years ago -- they got all excited, had a tool, took it to the customer's office to demo it to the VP of marketing, and they couldn't get through, because the firewall was blocking everything but port 80. What they found is that even these robust protocols, working well, aren't going to work across the global Internet at large, because we have got this policy in place blocking everything but port 80. So they all started to adopt HTTP, and again more and more overhead is being placed on it. Security-wise it's an open window: just because you open that port doesn't mean only good things are going to come through, and we keep making clever ways to come through that hoop. So, goodbye Internet, welcome to the port 80 network.

So, the future challenges: what is the end game? I believe there is going to be a global aggregation of content. We have a global market now and an audience that has grown up to expect global reception of content, and I don't expect this cable-package stuff to last much longer; that said, I am surprised it lasted as long as it did. We had stumbling blocks along the way; we didn't have the uptake in early video that I expected, but it's happening now, and what is happening with the content owners is that more and more they are injecting via IP and not relying on satellite bounces to get content there. With that investment in place, there is an opportunity to take advantage of it for aggregation. Do the edge providers stay in the food chain? If video is their value-added service to the customers, how are they going to stay in the food chain? Again, I don't have answers; I am shaking the bee's nest. We also have content owners trying to maintain brand identity. If you license YouTube to inject your content, everything looks like YouTube; if I go to Disney, they want to know it is their brand being sold to your kids. People like Move don't have their identity upfront; these contracts are based on who owns the content and what their expectations are. Who will be the next wave of content providers? I thought it was going to be new players, but we have the big guys out there who are doing it and making the investments now; they just need the patience to erode these old contracts so they can get straight to you, the consumer, and not have to go through a middleman.

Tier 1s still have a role in providing connectivity, but in terms of end to end, not so much. Will AMT make a difference? Probably. And will these firewalls force all the video onto HTTP? It's already happening. You brought up something, Danny: we put firewalls in to provide a service, then we find out they are breaking other stuff, so you start doing bad behaviour to get through the broken stuff, this becomes the operational model, and all your standards have to adopt it and move forward with it. Danny mentioned it as well; I keep seeing this coming back, and it's happening with HTTP transport too, not just through the firewalls: we have cache services that work for HTTP, and if you have got a video service you can lay it on top of the caches and you are there. It's a self-reinforcing model. If the cache services start using multicast transport at the core, then they are not forced to do unicast end to end: they can use an efficient system like multicast in the core and get through the firewalls down at the edge. All right. Questions?

AUDIENCE SPEAKER: Lorenzo: I am curious: has anyone written an unreliable sequenced-packet implementation that uses TCP port 80? You could do it by using the TCP sequence number as a packet sequence number, and all you need to do is ignore the ACKs from the other side and somehow get the stack to refuse retransmission. Because if you do that, then the firewall in the middle, to block you, has to essentially recreate the state machine of all the nodes behind it, and that sounds very expensive. So, as you say, as long as the window is open...

SPEAKER: You can drive the truck through it. Even if the truck is loaded, right.

Lorenzo: I was curious if you had looked into that.

SPEAKER: It's interesting; it's been in the back of my head, but I haven't looked into TCP enough to dive through it. You are saying on the transmit side don't retransmit, just drop those packets?


SPEAKER: What would the receive stack do?

Lorenzo: If it's behaving, the kernel will treat it as basic TCP -- it will receive the data.

SPEAKER: Interesting.

AUDIENCE SPEAKER: That would require a raw socket on Unix-style systems; I don't know about Windows. But it's not impossible.

SPEAKER: Sure, sure. Let's add bad behaviour to overcome bad behaviour.

AUDIENCE SPEAKER: It's horrible.

AUDIENCE SPEAKER: The problem is it won't help you so much, because the evil middleboxes will only permit HTTP, resets and so on.

Lorenzo: Very expensive to implement, I expect it to be.

AUDIENCE SPEAKER: If we are going to invent new TCP behaviours and fix all the middleboxes on the way, we might as well fix the middleboxes for good and not fiddle with the TCP stacks.


SPEAKER: Like a cache server, or the firewall itself looking at the TCP flow?

AUDIENCE SPEAKER: Specifically the firewalls, because your typical end-user firewall is looking at TCP state, so if a TCP reset comes along, no matter what the end points do, the chance that the firewall will just drop the stuff is very high. If you are going to work around broken firewalls by breaking TCP, you could have fixed the firewall instead, which is about as impossible as getting TCP changed.

SPEAKER: If you are trying ‑‑

AUDIENCE SPEAKER: Not the firewall; you are trying to work around the guy who thinks that only HTTP should be allowed.

Lorenzo: (Inaudible) it might not maintain sequence numbers for all the connections that are going through it; that might be expensive. It's just an idea.
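The idea debated above -- TCP framing on the wire, but datagram semantics with no retransmission -- can be simulated in-process. This is only a thought-experiment sketch under that assumption: a real implementation would need raw sockets and a kernel willing to suppress retransmission, exactly the obstacles the discussion raises.

```python
# Simulation of the "unreliable TCP" idea: the sender stamps fixed-size
# segments with TCP-style byte sequence numbers and never retransmits
# (ACKs are ignored); the receiver delivers whatever arrives and infers
# loss from gaps in the sequence space. In-process lists stand in for
# the network; no sockets are involved.

SEG = 1000  # bytes of sequence space consumed per segment

def send(chunks, start_seq=0, drop=()):
    """Yield (seq, data) segments, silently dropping the indices in `drop`
    to model loss; nothing is ever retransmitted."""
    for i, data in enumerate(chunks):
        if i not in drop:
            yield (start_seq + i * SEG, data)

def receive(segments, start_seq=0):
    """Deliver segments in arrival order; a jump in the sequence number
    means bytes were lost and no retransmission is coming."""
    delivered, lost_bytes, expected = [], 0, start_seq
    for seq, data in segments:
        lost_bytes += seq - expected
        delivered.append(data)
        expected = seq + SEG
    return delivered, lost_bytes

chunks = [b"frame0", b"frame1", b"frame2", b"frame3"]
out, lost = receive(send(chunks, drop={2}))
print(out, lost)   # [b'frame0', b'frame1', b'frame3'] 1000
```

The audience's objection maps directly onto this model: a stateful middlebox that tracks `expected` for every flow would itself have to tolerate the gap, or it will reset the connection.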

CHAIR: OK. Thank you.


BRUNO QUOITIN: So, good afternoon, my name is Bruno from the IP Networking Lab in Belgium. I would like to talk to you about a tool we have developed named C-BGP, whose aim is to model BGP routing in large-scale networks, and in this talk I will talk about trying to model an ISP network with this tool.

The motivation for developing this tool is that most capacity planning tools model the network as a bunch of nodes interconnected by links, without the neighbour ASes. With that kind of model you are not able to answer questions where inter-domain routing and traffic have to be taken into account.

For example: what will be the impact of a peering link failure on my traffic? What happens if I change a BGP routing policy on my routers -- what happens to my traffic? How can I compare various providers regarding BGP routing, regarding how my traffic will flow through my peering links? To answer these, you have to build a much more complex model where you take into account the interconnection with neighbour ASes.

For example, you have to take into account peering links; traffic that is not just between your nodes but going to other destinations; the large number of prefixes in the BGP routing table; and iBGP itself, if you have route reflectors. So that is quite complex.

So that is why we developed C-BGP, a tool that takes the configuration of your network and is able to compute the routes BGP would have computed.

I will give you some introduction to the C-BGP tool, which is really the routing solver. Then there is another tool, SpinNet, which is used to build a model of an ISP network from data collected in the real network. And I will briefly give an example of a C-BGP application.

So C-BGP is basically two things: a network topology and a configuration database. You put all your IP equipment configurations there and how they are interconnected; you put in the configuration of your BGP policies, your BGP neighbours and so on; and you can also inject real BGP routes into the model. Based on that, C-BGP will compute the routes that each router in the model would have computed.

The output of this computation is called the routing state; it's basically the content of the routing tables of all the routers in the model.

As an additional step, based on the routing state, you can forward traffic: for example, you can forward traffic collected with NetFlow over the routes you have computed and then compute the load of each link in the model.

We have, of course, a limitation in our network representation: we only focus on layer 3 interconnectivity, so we don't see Ethernet and so on.

We can, of course, add some notion of that -- and we have tried this model with quite large topologies.

Here is an example of how you could script a very simple topology with C-BGP. First you describe the topology; here it is very simple. Then you configure some routing stuff, the IGP weights; add some BGP routers; and inject a BGP network -- so here one network, 255/8, would be announced from router 1.0.0.0. Then you run the simulation, and from that you can start mining the routing state and trace the route from one router to a given destination; the tool will output either the content of the routing tables or the different hops of your path.
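The IGP half of what such a solver computes can be sketched briefly. This is not C-BGP's actual syntax or code, just an illustration in Python of deriving a routing state (shortest paths and hops) from a weighted topology; the router names and weights are invented.

```python
import heapq

# Minimal sketch of the IGP side of a routing solver: from a weighted
# topology, compute each router's shortest-path state and trace the hops
# toward a destination, analogous to the C-BGP script described above.

topology = {  # router -> {neighbour: IGP weight}; illustrative values
    "R1": {"R2": 10, "R3": 100},
    "R2": {"R1": 10, "R3": 10},
    "R3": {"R1": 100, "R2": 10},
}

def shortest_paths(src):
    """Dijkstra: return {router: (cost, previous_hop)} from src."""
    state = {src: (0, None)}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > state[node][0]:
            continue                      # stale heap entry
        for nbr, w in topology[node].items():
            if nbr not in state or cost + w < state[nbr][0]:
                state[nbr] = (cost + w, node)
                heapq.heappush(heap, (cost + w, nbr))
    return state

def trace(src, dst):
    """Walk the previous-hop chain back from dst to produce the path."""
    state = shortest_paths(src)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = state[node][1]
    return list(reversed(path))

print(trace("R1", "R3"))   # ['R1', 'R2', 'R3']: cost 20 beats the direct 100
```

Lowering the R1-R3 weight below 20 in `topology` would flip the traced path to the direct link -- the same kind of what-if the IGP-weight demo later in the talk performs.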

So that is basically what the tool does.

So we have two route computation models in C-BGP: one which computes the IGP routes, so shortest paths, and a more dynamic BGP model where we compute the steady state -- the routing state of BGP once it has converged. We support the full BGP decision process and most of the BGP attributes. We also support route reflectors, of course, and we can inject real routing tables in MRT format into the tool, thanks to, for example, libbgpdump, which is maintained by RIPE.
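The "full BGP decision process" mentioned here follows a well-known ordering of tie-breakers. The sketch below shows only the first few standard steps (local preference, AS-path length, origin, MED) with made-up route attributes; it is an illustration of the decision ordering, not C-BGP's implementation.

```python
# Hedged sketch of the start of the standard BGP decision process:
# prefer highest LOCAL_PREF, then shortest AS path, then lowest origin
# (IGP=0 < EGP=1 < incomplete=2), then lowest MED. Real implementations
# continue with eBGP-vs-iBGP, IGP cost to next hop, and router-ID steps.

def best_route(routes):
    """Pick the best of several candidate routes for one prefix."""
    return min(
        routes,
        key=lambda r: (
            -r["local_pref"],     # higher local-pref wins, so negate
            len(r["as_path"]),    # shorter AS path wins
            r["origin"],
            r["med"],
        ),
    )

candidates = [  # attribute values invented for illustration
    {"peer": "providerX", "local_pref": 100, "as_path": [64500, 64501], "origin": 0, "med": 0},
    {"peer": "providerY", "local_pref": 100, "as_path": [64502], "origin": 0, "med": 50},
    {"peer": "peerZ", "local_pref": 90, "as_path": [64503], "origin": 0, "med": 0},
]
print(best_route(candidates)["peer"])   # providerY: equal pref, shorter AS path
```

Running this decision per prefix, per router, over an injected MRT table is essentially how a converged routing state is produced.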

So, what can you do with this? Once you have computed a default routing state based on the configuration of your network, you can start playing with the model: for example, failing links or changing policies, and computing the different routing states that result from these changes. The goal is to compare these states with each other, to see what happens if I change that stuff in my network.

Now, building a C-BGP model of a more complex network is a difficult task; you have to introduce a lot of parameters. So we have built a side tool based on a lot of parsers, so we can ingest real network data and build the C-BGP model. For example, we can parse the output of commands you run on your favourite equipment, such as a dump of the OSPF link-state database, and build the topology from that; get information on the BGP neighbours; and build the C-BGP model automatically. It will also help you visualise your network: here is a view of the network, which dates from, I would say, March 2007. You can, of course, also talk to the routing solver through some scripting and, for example, run traceroute among other commands.

Here, I will show two very simple what-if scenarios. When you have the visualisation of your network, you can change the state of a link on-line, so fail a link, recompute the routes, and see where the traffic would pass between two nodes after the failure. You can also change an IGP weight: here, for example, I change the IGP weight of the link between Frankfurt and Vienna and see that my path has now changed. It was previously going here, and now that I have lowered the IGP weight, the path has changed. So these are very simple examples.

Regarding the BGP data, you can of course access all the computed BGP routes with all their attributes inside the tool, and you can get some statistics on routing table contents and so on. So that is it for the SpinNet tool.

So now let's move to one case study. The idea of the case study was to see: what if I add a new peering to my network? I want to buy connectivity from another provider, or connect to another peer at an interconnection point, and I would like to see the impact on my routing and on my inter-domain traffic. Here is a very simple example: let's say I have two providers, X and Y, and this is my initial traffic matrix, and I add to my router C a peering with a new provider Z. I would like to see the impact: for example, what might happen is that a lot of my traffic is attracted by this new peering link and the link gets overloaded. So the idea is: let's try to predict this before trying it, OK?

So, we made a large-scale experiment. This experiment is quite old -- it's already four years old -- but it is data I can show you publicly. It's based on GEANT, and at that time, four years ago, there were only 150,000 different BGP prefixes in the BGP routing tables and, in terms of the number of different paths, around 650,000 routes.

We also had access to NetFlow data which was collected on all the external interfaces of GEANT, and we played a little bit with that.

Here I will show a very summarised view of the results, and I will focus on the links GEANT has with its commercial providers. At that time it had two commercial providers: there were four peering links with one provider and two with the other one. These are the links shown here, and, based on the tool, we computed the amount of traffic that was going out through each of the links with the current configuration. You can observe that the distribution of the outgoing traffic on these peering links is highly unbalanced: one link was carrying 50 percent of the outgoing traffic, two of the other links were carrying around 20 percent each, and the other three peering links were carrying almost nothing, OK? So that is the default routing state based on the initial configuration.

What we did was decide to suppress some of the peering links and see how the traffic would be redistributed with the new routing. We also took a prospective new provider: we had a full table from it and decided to inject it at different places to simulate the new peering. So, in the following slides, the scenarios where we add a new peering are named add-RX, where RX is the router where we add the peering, and the scenarios where we remove an existing peering are named del-PRX, where PRX is the peer router. That is the set of scenarios.

In the following slide we show results where we have suppressed PR1, PR2, PR3 and PR4, and where we have added the full BGP table of the prospective provider at the other routers. Everything is summarised on a single figure. Here, this is the distribution of the outgoing traffic on the six links with the providers of GEANT in the default situation -- the distribution I showed you a few slides ago -- and the other columns are the same kind of distribution for the different scenarios. The first four here are the ones where we have removed a peering. What we see is that if we remove the peering with PR2, one of the routers of these providers, a large fraction of the traffic that was previously carried by PR2 is now going through the pink one here, which is PR4. So it hasn't improved the distribution of the traffic on my peering links: all the traffic that was going out through PR2 is now going out through PR4. That might be surprising.

Another interesting one is here, where we have added a full routing table at R1, and this peering takes all the traffic that was previously going out through PR2. So you add a new peering and it absorbs all the traffic going out through another, previously existing peering. That is the kind of experiment you can do before buying connectivity from another provider: you can simulate it and try to predict what the impact would be on your routing and on your traffic.
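The traffic side of these what-if experiments reduces to a simple aggregation. This sketch is only an illustration of the method: once the solver produces a routing state (prefix to egress link) for each scenario, the per-prefix NetFlow volumes are summed per link and the distributions compared. The prefixes, link names and volumes below are invented.

```python
# Sketch of the what-if traffic comparison: sum NetFlow volume per egress
# peering link under each scenario's routing state. All values invented.

traffic = {"p1": 40, "p2": 30, "p3": 20, "p4": 10}   # Mbit/s per prefix

def link_load(routing_state):
    """Aggregate per-prefix traffic onto each prefix's egress link."""
    load = {}
    for prefix, link in routing_state.items():
        load[link] = load.get(link, 0) + traffic[prefix]
    return load

# Default routing state: prefix -> best egress peering link.
default = {"p1": "PR2", "p2": "PR2", "p3": "PR1", "p4": "PR4"}
# Scenario del-PR2: routes that left via PR2 now prefer PR4.
del_pr2 = {"p1": "PR4", "p2": "PR4", "p3": "PR1", "p4": "PR4"}

print(link_load(default))   # {'PR2': 70, 'PR1': 20, 'PR4': 10}
print(link_load(del_pr2))   # {'PR4': 80, 'PR1': 20}
```

The toy numbers reproduce the surprise in the talk: removing one peering does not spread its load evenly, it simply shifts the whole block of traffic onto whichever link the decision process prefers next.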

So, as a conclusion, modelling BGP on large networks is quite complex and it involves a lot of data. When we had to model GÉANT, we had to collect a lot of data: information on the topology and configuration of the routers, NetFlow data. That is a lot. But you can predict some interesting things, and we are now trying to evolve our tool from a research tool into a more operator‑oriented tool, so I would be very interested in your feedback on the kind of experiments you would expect from a BGP tool, and whether that kind of experiment is interesting for you as network operators. That is why I am here at RIPE; I am very interested in your feedback. So if you have any questions, I will be...

CHAIR: Thank you very much. Does anyone have questions?

AUDIENCE SPEAKER: Swisscom. There is something I don't understand. This is very nice for modelling what happens to your outgoing traffic, but can you also model what might happen to your incoming traffic? This is very nice if you are a big content provider, for instance, but if most of your traffic is incoming, can you do something with this?

SPEAKER: That is a question we usually get. Yes, of course, most of our studies here are focused on the outgoing traffic; that is the easiest part. If you want to get an idea of what happens to your incoming traffic, then you also have to build a model of what is outside your network, and that is still the big challenge. You can inject into the model some AS‑level topologies, and there are some quite accurate ones, but you can always have inaccuracies in these topologies regarding the kind of business relationships between ASes and so on. I guess it's possible to do something, but the accuracy would probably not be very good unless we have a more accurate model of the whole Internet. However, if you have a small set of ASes, for example all the ASes at your interconnection point, you could already play with a model of all these ASes, if they are willing to cooperate and give some part of their configuration. But that is not easy to get.
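The business‑relationship issue mentioned here is the usual sticking point in AS‑level models: inferred topologies are typically constrained to "valley‑free" (Gao–Rexford) paths, with zero or more customer‑to‑provider hops, at most one peer‑to‑peer hop, then zero or more provider‑to‑customer hops. A minimal sketch of that check, with invented AS names and relationships:

```python
# Sketch of the valley-free (Gao-Rexford) path constraint used in AS-level
# models. The ASes and relationships below are invented for illustration.

def valley_free(path, rel):
    """path: list of ASes; rel(a, b) in {"c2p", "p2p", "p2c"} gives the
    relationship of hop a -> b. Returns True if the path is valley-free."""
    phase = 0  # 0 = climbing (c2p), 1 = peering hop taken, 2 = descending
    for a, b in zip(path, path[1:]):
        r = rel(a, b)
        if r == "c2p":
            if phase != 0:
                return False  # climbing again after the peak: a valley
        elif r == "p2p":
            if phase != 0:
                return False  # at most one peering hop, only at the top
            phase = 1
        else:  # "p2c"
            phase = 2
    return True

rels = {("AS1", "AS2"): "c2p", ("AS2", "AS3"): "p2p",
        ("AS3", "AS4"): "p2c", ("AS4", "AS5"): "c2p"}
print(valley_free(["AS1", "AS2", "AS3", "AS4"], lambda a, b: rels[(a, b)]))
print(valley_free(["AS1", "AS2", "AS3", "AS4", "AS5"], lambda a, b: rels[(a, b)]))
```

Errors in the inferred relationships feed directly into this filter, which is one reason predictions of incoming traffic over a whole‑Internet model tend to be inaccurate.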

AUDIENCE SPEAKER: Google. First of all, it looks very interesting, but I have two questions. First, do you understand internal topologies which are using traffic engineering? And the second question...

SPEAKER: I didn't hear it well.

AUDIENCE SPEAKER: Your parser, can it parse internal network topologies?

SPEAKER: No, we only focus on IGP, so OSPF and IS‑IS.

AUDIENCE SPEAKER: There was a mention that you can parse routing policies.

SPEAKER: Yes, part of the routing policies, but it's still ongoing work. Some parts of the configuration we can parse.

AUDIENCE SPEAKER: Can you parse complicated routing policies?

SPEAKER: Not too complicated.

AUDIENCE SPEAKER: Your models, do they take into account differences in terms of implementation between different versions of software?

SPEAKER: No, at the moment we have only a common implementation, so you can't change the behaviour between the nodes ‑‑ the BGP implementation on different nodes in the model.

AUDIENCE SPEAKER: BGP best‑path selection on Juniper is different even if you are not using any configuration parameters. Are you planning to implement some tweaks to the model where we can specify the behaviour of BGP best‑path selection for this type of configuration?

SPEAKER: We could if there is a need for it, but until now we have been working with networks that were quite homogeneous, and we did not have the need for this. Of course, with our limited resources, we cannot support all the different implementations of BGP.


SPEAKER: Does that answer your question?


CHAIR: If there are no more questions, then I think that concludes the presentation. Thank you very much.

(Applause)

CHAIR: That also concludes the Routing Working Group session for RIPE 58. I have one further announcement, which actually belongs with input from other Working Groups, I think. If you have been following what is going on in the Address Policy Working Group, you will have seen a lot of discussion about changes to the IPv6 allocation policies, and some of them have the potential to have an impact on the routing tables. So, because these two things seem to be very closely tied together, there will be a session tomorrow in the plenary ‑‑ the closing plenary, the first half hour, right after the coffee break in the morning ‑‑ where we will be talking about how the two things interact, trying to get some work done in that respect, with input from you all. So be there for that discussion. That is all, unless anyone has any other business. No. Thank you for coming, and for once we finish on time.