>>davis: okay, i think we are going to try and get started, if i could get everybody's attention. my name is larry davis, and i am the chair of the computer science department, and it's my pleasure and honor to introduce this afternoon's colloquium speaker, vint cerf. so i can only tell you a little bit about vint, because if i told you about all his accomplishments, you'd never hear from vint. i'd be up here for an hour. so currently he's vice president and he has the interesting title of chief internet evangelist for google. google is an internet search company [laughter]--for those of you who don't know.
and the reason for this is back about 20--25 years ago vint and his colleague, bob kahn, essentially invented the internet. he was co-designer of the tcp/ip protocols and the architecture of the internet. and he's been widely recognized for this really incredible contribution. a large part of our economy depends on the internet these days. in 1997, president clinton presented the us national medal of technology to vint and bob kahn for founding and developing the internet. all right. and then they also won what's called the turing award. in computer science that is the highest award given to anybody for truly fundamental contributions to computer science.
and then in november of 2005, george bush awarded cerf and kahn the presidential medal of freedom, which is the highest civilian award for scientists in the united states. now in today's rapidly expanding government, i am sure that barack obama will invent an even more prestigious award, and you can be certain that the first winner of that award will be vint cerf. and so, it's my pleasure to introduce vint, and welcome to the university of maryland. >>cerf: thank you very much, larry. [applause] thank you for not introducing me as al gore. [laughter] okay. are we in? good.
all right. well first of all, i appreciate the warm welcome. i'm just sort of stunned. the room was empty when we walked in about 20 minutes ago, and so--wow. i hope that i have something useful to say. i have to tell you that this interesting title, chief internet evangelist, was not one that i chose. in fact, when they asked me what title i wanted at google, i said "how about archduke?" [laughter] that sounded pretty good to me, but somebody pointed out that the previous archduke was ferdinand, and he was assassinated in 1914, and it caused world war one. maybe it's not a good title to have, so i
was happy to settle for internet evangelist. well, let me start by going back into history a little bit. this is what the predecessor to the internet looked like in 1969. around december, four nodes had been installed, and i was lucky to be a graduate student at ucla at that time and wrote the software that connected this thing called the sigma 7 up to the first node of the arpanet. the sigma 7 is in a museum now. some people think that i should be there along with it. [laughter] but this system was the beginning of experimenting with packet switching, with which i am sure you're all very familiar. at the time it was considered a really crazy idea.
if you were part of the telecommunications world, you knew that the way to do telecom was circuit switching. and so, at&t had absolutely no interest whatsoever in this technology, but they were willing to lease us lines with which we could build our own packet switches. around 1977, after i went from stanford to the defense department to lead the program, we experimented with getting three different packet-switched nets interconnected. by the way, that's why it's called the internet. we had the arpanet, and then bob kahn came out to stanford in the spring of '73 and said, "i have two other nets. one's a mobile radio net. one is a multi-access packet satellite net." and, of course, there is the arpanet, which
bob also worked on. and he said, "the problem is how do i get them all interconnected in a way that looks more or less uniform?" they had different data rates, different delays, different error rates, different packet sizes, and so on. the consequence of all that is that bob and i spent about six months trying to figure out a way to do that, and that is where the tcp/ip protocols came from and the idea of gateways. so, we built this 3-network system and, for the first time, did a demonstration that we could get all three of them to work using these new tcp/ip protocols. this was a particularly interesting test. there was a van that was built by sri international
and was driving up and down the bayshore freeway radiating packets in this mobile packet radio network. the packets were artificially forced to go through a gateway--we didn't know they were supposed to be called routers then, so we called them gateways--between the packet radio net and the arpanet, and the routing for these internet packets was forced to go through an internal satellite hop inside the arpanet all the way to norway and then down by landline to university college london. then they hopped out of that international extension of the arpanet, went into another gateway between the arpanet in england and the packet satellite net over the atlantic, went through the packet satellite net down to a ground station in _____(??), west virginia, through
another gateway back into the arpanet and then all the way across the country down to usc's information sciences institute in marina del rey. well, if you measure the distance between menlo park--which is where sri was--and isi, it's about 440 miles. but if you measure the path of the packet, it was about 100,000 miles, as it went up and down twice to synchronous altitude and back and forth across the atlantic ocean and the united states twice. so, it worked. and i have to tell you, we were leaping around saying, "it works, it works!" if you have anything to do with software you know it's a miracle when it works. [laughter] so the chief internet evangelist not only
believes in miracles, but he relies on them. [laughter] so it was pretty exciting to get three different networks inter-operating with the same set of protocols, because you could do almost anything to get two networks to interact by doing all kinds of conversions and whatnot--but three or more was a big deal. so if you fast-forward to 1999, the internet looked kind of like this. bill cheswick, who at the time was at bell labs, did this automatic mapping program to take the backbone routing tables from the bgp protocol and show what the various autonomous systems were in the network and what their connectivity was. so each color is a different autonomous system. if you look at 10 years later, in 2009, it
looks just like this. it's just bigger. there are more autonomous systems, more colors, and a larger number of users. speaking of which, the statistics of the net are very interesting. there are 625 million machines that are visible publicly on the internet. and i emphasize "visible publicly," because over time, more and more machines have come on the net that we can't see in the public domain name system. and the reason for that is that people have put up firewalls to isolate enterprise systems or university systems and the like, so that not everyone can see and interact directly with all the machines on the net. but of the ones that we can see publicly,
there are more than 500 million of them. the number of users on the net is estimated at about 1½ billion--almost 1.6 billion. and the other phenomenon, which has taken place during the last 10 years, is an incredibly rapid growth of mobiles. 3½ billion are estimated to be in use, and another billion may go into use this year. some of them will be replacements, and some will be new additions. what is important to us at google and anyone who is thinking of offering internet services is that an awful lot of people will first have their interaction with the internet through a mobile and not a laptop. and so anyone who is thinking seriously about offering internet services has to start thinking about the different avenues by which this
interaction will take place and the constraints. now, this is a blackberry. i have a g1 in my bag. this thing has a display the size of a 1928 television set and a keyboard that is suitable for people who are 3 inches tall--and varying data rates, depending on where you happen to be--so it is a very constrained environment, but a very important one for all of us who are trying to treat, as well as we can, the full range of users on the net. speaking of which, where are they? this is another very interesting statistic. if you look, of course, you can see that asia is now the largest single grouping of users on the internet. that shouldn't be too big a surprise. more than half the world's population is what
we would call asia. and certainly china and india are part of that--malaysia, indonesia, and so on. but they are only at 17.1 percent penetration right now. europe is almost 400 million people, but i've given up making any projections about europe because they keep adding countries. [laughter] so i don't know what to predict about europe. but they're at about 50 percent penetration. north america, which used to be the largest single grouping of users, is now at almost 75 percent penetration. there isn't going to be a lot of growth in north america with regard to absolute number of users. so if you're thinking business on the internet, you really need to pay attention to these
numbers, because the business growth is going to come from asia and europe and some of the other parts of the world where the penetration rates are as you see them on the right hand side of the slide. so this has implications for the languages that are needed, and for the styles and cultures of interaction, which vary from one country and one culture to another. if you don't pay attention to that, your business model may not work. and google has learned this very, very clearly. we've opened up engineering offices around the world, in part, to take advantage of people's native knowledge of language and of styles and customs. people don't interact the same way with search engines everywhere in the world.
and there are some parts of the world where the google homepage is thought to be incomplete, because there is almost nothing there--just this empty box. and it doesn't occur to them that they should type something in it. they are expecting to see things to choose from. and so we've had to vary the appearance of the google homepage for some parts of the world in order to accommodate their expectations. if you look here--this is just another statistical picture of--i'm sorry--the light penetration but heavy absolute population in asia, and some of why the penetration rates are as you see them.
the world on average is about 23 or 24 percent penetrated, which means that the chief internet evangelist at google has about 76 or 77 percent of the world still to convert. so if you guys want to help, let me know. now, this is a very important chart, and there will be a final exam on this at 5 o'clock. this picture has only one really important graph on it. it's the thing that's going down. and that is the remaining available ipv4 address space that the internet assigned numbers authority can hand out to the regional internet registries. it's going to run out somewhere around 2010 or so. and i blushingly admit it's my fault. around 1977, when i was at darpa, about a year's worth of debate had occurred among
the engineers helping to develop, test, and implement the internet as to what the address space should be. one group wanted variable-length addressing, and that went away very quickly because the programmers hated variable lengths, as it chewed up extra cycles to find the fields in the header. and they said that sucked, so that went away. so the only other options were 32 bits and 128 bits, and they couldn't come to any conclusion, so there'd been a year of arguments. finally i had to get this program moving. i said, "it's 32 bits, that's it. it ought to be enough to do an experiment." that's 4.3 billion terminations. i figured even the defense department didn't need more than 4.3 billion terminations to
do an experiment, right? now, the problem is the experiment never ended. so here we are in 2009--we're going to run out. so in order to correct for that, there is ipv6, and i hope that the university is preparing itself to implement v4 and v6 in parallel. i'm proud to tell you that google is doing that. we've spent the last 18 months--a little more than that--almost 21 months--implementing ipv6, so most of our services are accessible on both the v4 and v6 platforms. the problem, however, is that the v6 environment is not evolving in the same way that the v4 environment did. in ipv4 there was a connected core. there was the nsfnet backbone, the arpanet
backbone, and if you connected to any of them, you were implicitly connected to everyone else with the v4 protocol, because that's how it grew. in the ipv6 world, people are implementing it, but it's spotty. and so just because you implement ipv6 doesn't mean you're necessarily connected to someone else who's implementing ipv6. yes, you can tunnel through the ipv4 backbone, but tunneling is a very fragile way of building a system. and so one of the policy arguments that i have been putting forth is that isps should relax their interconnection and peering policies for ipv6, because it's in their best interest to have a fully connected v6 backbone. ultimately, and in the long run, some of the
metrics that are used to decide whether you should peer as opposed to buying transit will reemerge. but in the early stages of ipv6 deployment, i think it's smart for everybody to be as connected as possible. 128 bits of address space gives you 340 times 10 to the 36th unique addresses. that's a number only the congress can appreciate. [laughter] now, i used to go around saying that that meant that every electron in the universe could have its own webpage if it wanted to, until i got an email from somebody at caltech: "dear dr. cerf, you jerk. there are 10 to the 88th electrons in the universe, and you're off by 50 orders of magnitude." [laughter]
so i don't say that anymore. there are a few other features of ipv6 that are relevant. one of them is that if you want to go into end-to-end encrypted mode using the ipsec protocol, you're required to support that mode, whereas before it was optional. and there are some other little features of the ipv6 address structure, including something called the flow id, which, i would say, has not been experimented with very much. but anyway, those are some of the important features. the most important thing about ipv6, though, is that it just has a lot more address space, and it will allow the network to continue to grow.
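the address-space arithmetic in this section is easy to check for yourself. this is a quick illustration (python is just this editor's choice; it is not part of the talk):

```python
ipv4_total = 2 ** 32    # the 32-bit "experiment": about 4.3 billion addresses
ipv6_total = 2 ** 128   # ipv6: roughly 3.4 x 10^38, i.e. 340 x 10^36 addresses

print(f"{ipv4_total:,}")    # 4,294,967,296
print(f"{ipv6_total:.3e}")  # 3.403e+38

# the caltech correction: 10^88 electrons vs ~10^38 addresses
print(88 - (len(str(ipv6_total)) - 1))  # 50 orders of magnitude short
```

the last line reproduces the joke's punchline: 10 to the 88th electrons against roughly 10 to the 38th addresses leaves a shortfall of 50 orders of magnitude.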
[pause] i mentioned mobility before and the large number of mobiles. not all of them are internet enabled, but some are. maybe 15-20 percent now, but that percentage is bound to go up over time as more and more people choose to use their mobiles as information windows, or as devices that allow you to authorize payments, or to get access to the internet for its information value. one thing which i find an interesting potential is not to treat the mobile as a device which is the sole thing that you use to interact with the net. i mentioned earlier that it's a limited resource--small display, small keyboard.
imagine, though, a mobile which has the ability to detect other devices that are nearby. for example, if my mobile could detect that there are projection units up there and could interact with them, then the small display area on the mobile could be replaced by something much bigger. or when you walk into a hotel room, if the large flat-screen, high-res display were detectable to the mobile, it would be possible to display things that way. so what i'm thinking here is that mobiles should become more aware of their surroundings. we could invent protocols for that; we could use bluetooth, or 802.11, or 802.15.4--6lowpan--or some of the other standards for allowing these devices to interact with each other. in fact, imagine sitting in a car that is
not yet internet enabled, but it has a local area network--maybe it's got a gps receiver and a gps display. you could imagine the mobile participating in that local area network and giving the car the ability to access the public internet, so suddenly the automobile becomes internet enabled. i think there are a lot of possibilities here. in fact, another one is that if you're like me, you have entertainment systems at home, and each one of the boxes in the entertainment system has a remote controller. and i usually wind up fumbling around trying to figure out which controller goes with which box, and when i finally figure that out, that's the one with a dead battery. so what i propose is that we get rid of all
those remotes, we internet-enable all of the entertainment equipment, put it on the house network, and then the mobile becomes the controller. now the interesting thing about choosing that architecture is that you don't have to be in the same room in order to interact with these devices. in fact, you could have a service on the internet--which you go to through a webpage--at which you debate and discuss and negotiate what music and video you want to have on the entertainment devices, and then let that service figure out how to configure everything and get the materials downloaded to your entertainment system. so you could be managing this thing from anywhere in the world. of course, so could everybody else.
and so it's pretty clear you need to do something about strong authentication and strong access control, but that's a good thing. i mean, we want to see strong access control and strong authentication become a part of the normal framework of the internet, because otherwise there are very significant vulnerabilities that we can't protect against. so i am a big fan of forcing ourselves to introduce very strong access control mechanisms throughout the internet, including in these kinds of applications. what we have noticed is that many of the mobiles have access to gps--or in some cases they can at least estimate where they are based on triangulation, measuring the radio energy levels among the various base stations that they might be interacting with.
so these devices know where they are, and the consequence of that has been interesting to see. we watch people making queries. [cough] what we've noticed is that if they are carrying a mobile, they often make queries that are related to where they are. the consequence of that is that geographically indexed information has become increasingly valuable. so people who build databases that have information about where things are, what's going on there, what used to go on there, what might go on there in the future--those kinds of information packages have turned out to be increasingly valuable. this thing is trying to tell me that i am supposed to be lecturing now.
okay. now, i sort of intellectually understood the value of this geographically indexed information, but i didn't really viscerally understand it until my family went on a holiday. we went to lake powell in arizona. it's near a little town called page, arizona. as we were driving into page, we were planning to rent a houseboat and go out on lake powell for a few days, and somebody pointed out that there weren't any grocery stores on the lake, and that we were going to have to get all of our provisions before we got on the houseboat. so we started talking about what kind of meals we were going to prepare, and somebody said, "why don't we make paella?" and i remember thinking, oh, i love paella.
that's great, but you have to have saffron to make a good paella. where the hell am i going to find saffron in page, arizona? well, i was getting a good gprs signal, so i flipped out the blackberry, and i went to the google homepage, and i typed "saffron, page, arizona, grocery store." and i got back three choices with telephone numbers and a little map showing how to get to each one. so i clicked on one of the phone numbers, the phone rings, a voice answers, and i said, "hello, may i speak to the spice department, please?" now, this is probably a little store, and it's probably the owner of the store. "this is the spice department." [laughter] and i said, "do you have any saffron?"
and he said, "i don't know, but i'll go check." so he goes off and he comes back and he says, "yeah, i've got some saffron." so we followed the map to get to the store, and i ran in and i bought $12.99 worth of saffron--that's 0.06 ounces, in case you care. [laughter] and we made a great paella. but as i was walking out of the store, i realized that i had just, in real time, gotten exactly the information i needed when i needed it. i didn't get the answer, "you can get saffron in new york city, 1,500 miles away." so what was important to me is that this ability to carry your information window on your hip or in your purse and to get information that's useful right now is really stunningly valuable. and more and more people are discovering that
as time goes on. well, there are more and more devices showing up on the internet, some of which i never in my wildest dreams imagined, like refrigerators or picture frames. i remember somebody ran into my office about 10 years ago and said, "vint, vint, did you see the internet-enabled picture frame?" and my first thought was, "boy, that sounds about as useful as an electric fork." [audience laughter] it turns out it's actually a very nice gadget, because you don't have to boot up windows or log in or do anything. you just plug it into the telephone system or an ethernet jack--or some of them, i guess, have 802.11 wi-fi. and it just--every 24 hours--goes and logs
into a website which has been accumulating uploaded imagery which you put on there from your digital cameras. so we have people around our family with these little automated picture frames--and we all have digital cameras--so we upload pictures of the nieces and the nephews and the grandchildren. and you get up in the morning, and you get some sense for what everybody is doing, because it just cycles through the imagery. now, you can appreciate that if the website that these picture frames log into gets hacked, the grandparents may see pictures that they hope are not of the grandchildren. so suddenly security and access control, once again, become an important element of the utility of some of these things. and that theme, i think, is going to recur
more and more as we rely on and make more use of this connectivity that the internet confers. there are things that look like telephones that are actually voice-over-ip devices. of course, your laptop is doing skype and ichat and google talk and some of the other things already. and then there's this guy in the middle who invented the internet-enabled surfboard. he's in the netherlands, and i guess one day he must have been out waiting on the water for the next wave, thinking, "you know, if i had a laptop in my surfboard, i could be surfing the internet while i'm waiting--" [laughter] so he built a laptop into his surfboard, and then he put a wi-fi server on the rescue shack, [laughter] and now he's got a product--an
internet-enabled surfboard. so my prediction is that there are going to be billions of devices on the net--more devices than there are people. and many of them you see when you walk into a hotel room. you see web tv, and a little radio-connected or ir-connected keyboard. everybody's pda--here, anyway--is probably internet-enabled. video games are internet-enabled--people talk to each other while they're shooting at each other--makes for a great video conference. and there are washing machines. ibm apparently partnered with a company called miele (??), which makes very high-end washing machines, for use in academic settings. students love it, right?
you throw your clothes in the washing machine, start it up, and then it sends you an sms when it is time to move the clothes into the dryer. and that's great. you go to the bar and have a beer, and then your clothes tell you when it's time to come and pick them up. it's very convenient. this internet-enabled refrigerator--when i heard about it, i wondered, "so what do you do with an internet-enabled refrigerator?" and one thought is that it has a nice liquid crystal display with a touch-sensitive screen, and it's a way--for americans anyway--to augment the family communication system, which typically consists of magnets and paper on the front of the refrigerator.
now you can augment the family communications with blogs and webpages and emails and things of that sort--instant messaging. then i got to thinking about the possibility that you could put rfid chips on the things you put inside the refrigerator, so that the refrigerator could know what it has inside. so while you're at school or working, it's surfing the net, looking for recipes that it could make, and when you come home, you see a nice list of things you could do for dinner. which i thought sounded pretty cool. then the japanese came along and built an internet-enabled bathroom scale. you step on the scale and it figures out which family member you are based on your weight, and it sends that information to the doctor
to become part of your medical record. it seems perfectly okay, except for one problem--your refrigerator is on the same network. so you come home, and you see diet recipes coming up on the display, or maybe it just refuses to open because it knows you're on a diet. [laughter] this is bad! now, a lot of you may be in the computer science and electrical engineering field. i have bad news to tell you: you can't get the nobel prize. and the reason for that is that mr. nobel refused to allow any branch of mathematics to be given a prize. then you could say, "well, what about the economics prize?" and that doesn't use nobel's money--it's somebody
else's money. and it's usually awarded for work that--like john nash and the nash equilibrium--is clearly mathematical. but you don't get nobel prizes for anything in computer science. so i remember thinking, "well, we have to do something about this." so i got to thinking about quantum theory and the fact that quantum particles have this unique property that they can be in more than one state at the same time. then i got to thinking about wine, and it occurred to me that wine in a wine bottle is like a giant quantum particle, because it could be in multiple states at the same time--it could be absolutely awful, or it could be absolutely spectacular, or everything in between.
but you don't know until you pull the cork. this is like schrödinger's cat. remember the verschränkung experiment, where he had the cat inside the box with a little capsule of cyanide and a little piece of radium? if the radium emitted an alpha particle and broke the capsule, the cyanide was released, and the cat would die. so you close the system up and you ask, "what's the state of the cat?" and the answer is you have to treat it as both alive and dead until you open up the box to look inside. if there are any cat lovers in the audience, no cats were harmed. this is a verschränkung experiment. so i thought about writing up my theory of quantum wine bottles and sending it to the
nobel prize committee to see if i can get some credit for that. somebody--well, i'll come back to some other stories about wine in a minute. i don't have time to go through all the rest of these, but sensor networks are turning out to be an important part of our environment on the internet more and more. i have an example of that. i have a commercial sensor network that's running ipv6 on 6lowpan, an 802.15.4 radio net, and it's sampling every five minutes the temperature and humidity and light levels in every room in the house and then delivering that to a server, and the server accumulates that over time. now, i had an engineering reason for doing
this. at the end of the year i wanted to be able to go to the engineering people who were looking at my ventilation and air conditioning and heating system and say it was either too hot or too cold, or look what the distribution was. and i didn't want anecdotal stuff. i wanted real data to show these guys, so that's why i did it. but one of the rooms is the wine cellar, and it's very important that the wine cellar stay below 60° fahrenheit and above about 50 or 60 percent humidity. so it's been alarmed, and in case the wine temperature goes up above 60 degrees, i get an sms on my mobile. and that actually happened to me.
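the alarm rule just described--keep the cellar below 60 degrees fahrenheit and above roughly 50 percent humidity, sampled every five minutes--reduces to a simple threshold check. this sketch is hypothetical (the function name and message text are invented for illustration; it is not the arch rock system's actual interface):

```python
def check_cellar(temp_f: float, humidity_pct: float,
                 max_temp_f: float = 60.0,
                 min_humidity_pct: float = 50.0) -> list[str]:
    """evaluate one five-minute sample and return any alert messages."""
    alerts = []
    if temp_f > max_temp_f:
        alerts.append(f"your wine is warming up: {temp_f:.1f} F")
    if humidity_pct < min_humidity_pct:
        alerts.append(f"cellar humidity is low: {humidity_pct:.0f}%")
    return alerts

# a 70-degree reading, like the one from the failed cooler, trips the alarm
print(check_cellar(70.0, 55.0))
```

in the real system the server would evaluate each incoming sample this way and hand any alerts to an sms gateway.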
i was at argonne national laboratory last year, and as i walked in the door, my mobile went off. it was the wine cellar calling. [laughter] "your wine is warming up." so every five minutes for the next three days i kept getting this little message saying your wine is getting warmer. my wife was away for two weeks on a holiday and couldn't reset the cooling system. so by the time i got back home, it was at 70 degrees, which is not the end of the world, but not a good thing. so i called up the arch rock people and i said, "do you make actuators as well as sensors?" and they said yes, so there's a vacation project: to go install the actuator system. another example of why access control is important,
because i didn't want the 15-year-old next door to turn my wine cooler off for me. now it gets interesting when you start thinking about what else you could do with this instrumentation. the fact that it detects light levels and reports every five minutes means that if somebody goes into the wine cellar and turns the light on, i may be able to detect that, because i'll see a big jump in the lumens in the measurements. so that might mean i could tell if somebody has gotten into the wine cellar when i'm not there. but it doesn't necessarily tell me that they took any wine out. so the next step is to put rfid chips on the wine bottles. [laughter] that way i might be able to tell if any bottles leave the wine cellar without my permission.
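as an aside, the light-level trick--spotting a visit by the jump in lumens between consecutive five-minute samples--is just a difference threshold. a hypothetical sketch (the threshold value and function name are this editor's invention):

```python
def light_jumps(lux_samples: list[float], threshold: float = 50.0) -> list[int]:
    """return indices of samples where the light level rose by more than
    `threshold` lux since the previous five-minute reading."""
    return [i for i in range(1, len(lux_samples))
            if lux_samples[i] - lux_samples[i - 1] > threshold]

# dark cellar; somebody flips the light on at sample 3 and off after sample 4
print(light_jumps([2.0, 2.0, 3.0, 120.0, 118.0, 2.0]))  # [3]
```

each flagged index marks a moment when the cellar light apparently came on, i.e. a candidate visit.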
but somebody pointed out to me that you could go into the wine cellar, drink the wine, and leave the bottle there. [laughter] okay, so this has got to get a little more elaborate. we've got to put sensors in the cork that can tell whether there's any wine left in the bottle. [chuckles] and as long as we're doing that, we probably ought to start sampling the esters that give the wine its taste--the strawberry flavors and the blueberry or blackberry and whatnot. so after a while, you get to the point where you have a fully instrumented wine cellar, and you interrogate the cork before you open the bottle. and of course, if you discover that a bottle reached 90 degrees fahrenheit because the
cooling system failed, you give that bottle to somebody who won't know the difference. [laughter] a very useful capability. you are going to see more and more sensor networks becoming a part of the internet environment. part of the reason for this is that there is an opportunity for you and me to get a better sense of how we use our energy resources. so google recently announced a power meter program that could allow you to put an instrument in the house that could tell you which devices are consuming how much electricity. we don't have a very good feedback loop right now. we sort of know what our electric bill is per month, but we don't know what made it up.
and so it's my sense, anyway, that feedback about how you're using energy--how efficiently or inefficiently--is a way of helping people decide how to be more careful and a bit more green about the energy that they use. so this year, 2009, is turning out to be probably one of the most dramatic years for the internet in its entire history. not only is ipv6 starting to roll out, finally--we hope before it's absolutely needed--but the domain name system, which also has a number of vulnerabilities, is starting to be modified so that you can get digitally signed answers back when you make a query to translate a domain name into an ip address. today there's no guarantee that there hasn't been some interference--some modification or cache poisoning in the resolvers--that
give you the wrong address and send you to a fake site. but if you could get a digitally signed answer back which tells you that the integrity of that binding of the domain name and the ip address has not changed since it was put in by the holder of that domain name, then you'd have more confidence that the data you got back was accurate. so this is a hierarchical structure. each zone file either is digitally signed or points down to and provides signed records for the next level down. one big issue right now is who signs the root zone file of the domain name system, and there are recent developments at the department of commerce, which oversees both verisign and the internet corporation for assigned names and
numbers.and there's some continuing discussion about exactly who should sign the top level rootzone that hasn't been fully resolved yet. the other big change that's happening is thatfor many, many years domain names were mostly written in ascii characters--a very limitedset: 0-9, a-z, and a hyphen. but, there are--those statistics that i showed you showing people all over the world using the internet--formany of them--their languages are not naturally written in ascii characters.so there is great, understandable pressure to augment the domain name system with theability to write domain names in syrillic or arabic or hebrew or urdu or korean or japanese,and so on.
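[editor's aside, not part of the talk: in practice, non-ascii names are carried in the dns via an ascii-compatible encoding (idna/punycode). python's standard library ships an idna codec, so the round trip is easy to see in a few lines:]

```python
# a non-ascii label is mapped to an ascii-compatible "xn--" form
# before it ever reaches the dns. python's built-in idna codec
# (idna 2003) shows the round trip.
name = "bücher.example"          # a label with one non-ascii character
encoded = name.encode("idna")    # what actually goes into a dns query
print(encoded)                   # b'xn--bcher-kva.example'
print(encoded.decode("idna"))    # back to 'bücher.example'
```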
well, to do that, the internet engineering task force has chosen to use a system called unicode, which encodes the glyphs of about 100,000 different characters. the problem is that as soon as you introduce such a broad range of symbols into the domain name expression, you run into the problem that some symbols look the same. for example, in greek, latin and cyrillic, a lot of the letters look very similar, and it's even worse when you start looking at some of the more elaborate languages. like paypal, for example, could be written with a cyrillic "a," and the computer thinks it's different because a cyrillic "a" is encoded differently from a latin "a." and so if you innocently click on something that looks like paypal, you may end up at the wrong site, which invites you to log in--which you do--and then it says, "oh, there is a little problem," and it sends you over to the real paypal, and you log in again. meanwhile, it's draining the account using your username and password. so there are issues associated with the use of this extended character set, which requires some careful attention. i am presently chairing the working group, in fact, on what's called idnabis, to try to finalize the specs for which character sets can be used. we cannot guarantee safety here. it's just not possible. just as one and lowercase "l," or zero and "o," are confusable in ascii, there are lots of other cases that we cannot find simple rules to rule out. so we'll try to rule out as much as we can, like punctuation and things like that. but the registries are going to have to also suppress inappropriate use of domain names. you might say, i'm not going to allow you to mix scripts inside of a label in a domain name, as a way of resisting some forms of abuse. so that's a big set of changes for the internet--yeah--in this year.

i want to spend a little time here on cloud computing, and the reason that i want to is that it is an interesting paradigm that people are addressing. if you read any history of computing, you will recall that in the 1960s there was a common notion called the computing utility. this is usually shown as a gigantic building
somewhere, with smoke coming out on the top and a huge computer inside, which everybody got to by way of the telephone system. well, 40 years later we have huge computing resources in big buildings with steam coming out the top, and everybody gets access to it through the internet. well--except that the thing inside the building is not a single mainframe--in the case of google, anyway, it's literally a classified number of computers in each data center, and the data centers are interconnected with each other. the cloud has some very interesting features. one of them is that you get to dynamically allocate the resources of the cloud to computation, so as computation demand varies for each user, you get to expand and contract the available resources for that particular user, rather than having a fixed load--a fixed assignment of tasking--for each one of the machines. there are a lot of interesting side effects of trying to do computing this way. at google, for example, because we want you to be able to get access to your information under all circumstances, we actually replicate a lot of data at multiple data centers. the consequence of that is there's a huge amount of information flowing back and forth between the data centers, so we had to build a special, private network, basically, to link all the data centers together. there are a lot of other interesting questions about how to make these data centers work well.

moore's law is broken, as i think you've all noticed. the problem is that the clock speeds are not going up, so the lazy programmers, who got the benefit of increasing clock speeds so that their algorithms just ran faster because the clock went up, don't have that benefit anymore. the alternative to moore's law--which, instead of increasing clock speed, simply increases the total number of cycles available per chip--achieves that goal by having multiple cores on each chip. but the clock speeds are not going up. the side effect of that is that if you have algorithms running that don't happen to be easily parallelized, you have a problem. you have to go figure out how to take a serial algorithm and parallelize it.
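[editor's aside, a sketch not from the talk: the simplest version of that recasting is a serial loop handed to a process pool--which only works when the per-item steps really are independent:]

```python
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # stand-in for an independent, cpu-bound step
    return n * n

def serial(items):
    # one core, one item after another
    return [work(n) for n in items]

def parallel(items):
    # hand each item to a separate process, one per core; this only
    # pays off when the items are independent of one another and
    # work() is expensive enough to cover the process overhead
    with ProcessPoolExecutor() as pool:
        return list(pool.map(work, items))

# parallel(range(8)) returns the same list as serial(range(8)),
# just computed across cores instead of sequentially
print(serial(range(8)))     # [0, 1, 4, 9, 16, 25, 36, 49]
```

algorithms with data dependencies between steps don't decompose this way, which is exactly the problem being described.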
you can't even necessarily get a pipeline out of that multi-core chip. it depends on how the chip interconnections are done and what kind of access you have to the common bus. now, there's another problem associated with multi-core chips. it has to do with how much data you can push in and out of the chip itself. can you push data back and forth fast enough to keep all the cores running on a set of problems? those are issues that have not been fully resolved, and for people who are looking around for dissertation topics, i can tell you that looking at cloud computing and multi-core chips and things of that kind could be a rich territory in which to do that.

finally, there's this question of--well, actually, let me go one more here. all right. so now, i want you to think back for a minute about what the world was like around 1969, when the arpanet was being built. there were networks out there that already existed. it was not the first computer network. but most of those nets were proprietary: ibm had sna, digital equipment corporation had decnet, hewlett-packard had ds, which i think stood for distributed systems. they didn't work with each other. occasionally, people would build gadgets that would let you translate back and forth, but it was not a uniform system. so they were all proprietary, and, in fact, those networks didn't know that there existed any other nets. they couldn't even express the idea of "go from this net to that net," particularly from the sna network to a decnet. so the internet was designed to overcome the problem of expressing the idea of moving data from one net to another. people are going to build clouds for very good economic reasons--but they're going to build multiple clouds, and the clouds don't have any vocabulary right now for referring to another cloud. we're back in the 1960s, in the days when networks didn't know about each other. so i believe that there are some really interesting dissertation topics waiting to be written about inter-cloud interactions.
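[editor's aside: to make the missing vocabulary concrete, here is one entirely hypothetical sketch of a minimal "send this to that other cloud" message--every name and field below is invented for illustration and corresponds to no existing api:]

```python
import json

def make_transfer_request(object_id, src_cloud, dst_cloud, acl):
    """build a (hypothetical) inter-cloud transfer message.

    the hard problems live in these fields: how clouds name each
    other at all (dst_cloud), and how the access-control metadata
    (acl) travels with the data and stays enforced on arrival.
    """
    return json.dumps({
        "verb": "replicate",        # or "move"
        "object": object_id,        # a name meaningful to the source cloud
        "source": src_cloud,        # how does a cloud identify itself?
        "destination": dst_cloud,   # ...and refer to another cloud?
        "access_control": acl,      # must survive the hop intact
    })

req = make_transfer_request(
    "dataset-42", "cloud-a.example", "cloud-b.example",
    {"readers": ["alice"], "writers": []},
)
print(json.loads(req)["destination"])   # cloud-b.example
```

none of the semantics here are standardized--which is the point of the passage.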
for example, i've got data sitting in cloud a, and let's even imagine the data is protected--it's access-controlled. there's metadata associated with it that cloud a is responsible for controlling access to. and you decide you want to either replicate the data in another cloud or move it to another cloud. so the first problem is, how do i say that? the first cloud needs to have a way of saying "send this to this other cloud." the second problem is, how do i convey the meta-information that controls access to the data as it goes into cloud b? is that--does that mean i should stop, or--? [laughter] or is that just--? oh, you leaned against the light switch. you have a free _____ ?? butt. [laughter] okay. [laughter]

so anyway, the problem here is that we don't have vocabulary for talking about inter-cloud interaction. so if you're looking for an interesting area to research, you're at the same stage in the cloud world as we were in the internet world in the early 1970s. and i believe that there are some really interesting technical problems and some interesting design problems and invention opportunities that lie ahead. let's see. gib, i want to make sure that there's some time for a q and a here, so i want to be careful.

i think i've already hinted at some of the access control issues here, but remember, i mentioned strong authentication repeatedly. we tend not to use authentication very well. a lot of us still use re-usable passwords, which is really a bad idea. if you've ever dealt with online services, and you try to log in, and it doesn't work, and then it says, "if you forgot your password, click here," and then it asks you these secret questions--well, as some of you will remember, governor palin had a little problem because the secret answers to the secret questions were discoverable on the internet, so they weren't really very secure. in fact, banks and securities companies often suggest that you should have these secret questions that nobody else but you should know the answer to. i believe if you're going to use this kind of technique--i'm not a big fan of it--but if you use that technique, you ought to have the secret questions, and you ought to have secret answers, but you'd better have the secret answer plus random material that is not going to be easily guessed or found on the internet by someone else. i would prefer to see strong authentication mechanisms where you use non-reusable passwords. public-key crypto gives you an opportunity for some things like that. i even like the idea of having a device that has multiple keys in it, which can be your proxy to represent yourself in a variety of different systems. and during the public-key exchanges and the like, this little device could plug into your laptop, and when the application needs a new strongly identified identifier, you let that little device generate the appropriate keys and send the data back and forth. you need to access-control that little device, so that somebody can't pick it up and become you in the virtual world. you might do that with a fingerprint reader. we aren't quite at the point where we can put an iris reader in these little devices. although, this macintosh here has a little camera that's mounted at exactly the right place to do an iris scan, and you can do that from two or three feet away. now, whether this has enough resolution to do it, i don't know. maybe not.
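[editor's aside, backing up to the non-reusable password idea for a moment: time-based one-time passwords (the scheme behind most authenticator apps, specified in rfc 6238) can be sketched with nothing but the standard library. this is a simplified illustration, not production code:]

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """simplified rfc 6238 time-based one-time password."""
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (rfc 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# the same secret in the same 30-second window yields the same code;
# the next window yields a different one, so a sniffed code is
# useless a minute later -- that is what "non-reusable" buys you.
print(totp(b"12345678901234567890", t=59))   # 287082 (rfc 6238 test vector)
```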
but the notion of being able to do iris scans is an attractive way of authenticating. it's better than doing a retinal scan, where you have to get right up close to the thing, or sticking your finger into a hole somewhere and hoping that it will be there when you pull it out again. [laughter] so my sense is that we should be able to build strong authentication devices that allow you to authenticate yourself based on what you know and something that you have, and if you lose it, someone else isn't able to abuse it by representing themselves as you. so i've actually touched on most of these points here, but the thing i want to emphasize is that i honestly believe there's a lot of opportunity for people to explore issues associated with cloud computing that haven't been analyzed yet. let me skip over that.

oh, there's another interesting situation here. it's one thing to just move data back and forth between clouds--and something else if you wanted to get multiple clouds to actually cooperate with each other. i'm imagining sitting here with my mobile. it becomes my little cloudlet, right? it's a little access controller. one question is, if it's connected to two clouds, does it become an inadvertent channel between the two clouds? this is sometimes known as a covert channel--a channel that was unintended. or can i use it as a controller and essentially cause things to happen in between the two clouds? you need a vocabulary for that, including strong authentication and other things associated with it. so this list of questions, i hope, will stimulate some of your thinking about research.

let me switch gears for just a second and talk a bit about the semantic web. now, i'm no authority here. tim berners-lee would be a good guy to bring here, if you can persuade him to come down from mit to talk about that. and i have to admit i was very skeptical about this whole notion of the semantic web, and i still have some skepticism, but i think i understood an idea that tim had that i didn't understand before. let me illustrate it--i may not use the right vocabulary, but let me try anyway. imagine for a moment that we have a way of expressing in our web pages the semantic ambiguities that we're faced with. so let's take the word "jaguar." we know it could mean the animal, could mean the car, could mean the operating system. suppose that you are creating a web page, and you notice that there's this ambiguity, so you actually write into the web page something that says "jaguar" is ambiguous: here is a web page where it's used as the car, here's where it's used as the animal, and here's where it's used as the operating system. and you just embed that in your web page. along comes the google crawler, the yahoo crawler, the microsoft crawler, and it stumbles over this piece of semantic information. and it says, "oh, that's an interesting fact. i didn't know that," and incorporates that into its knowledge base. now someone types in a query with the word "jaguar," and the search engine, which happened to have stumbled over the semantic hint, says, "well, did you mean the car or the animal or the operating system?" suddenly we--you and i--are allowed to contribute semantic knowledge into the system, which is regularized by having a standard way of expressing it. so i get more excited about the fact that people might be able to contribute to the growth of semantic knowledge in the internet.
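[editor's aside, a crude sketch of the mechanism just described--the annotation format here is invented purely for illustration: a page embeds a machine-readable disambiguation record, a crawler folds it into a shared knowledge base, and query time gets a "did you mean...?":]

```python
# a page author embeds a small, machine-readable disambiguation
# record (this format is invented for illustration).
page_annotation = {
    "term": "jaguar",
    "senses": {
        "animal": "https://example.org/jaguar-the-cat",
        "car": "https://example.org/jaguar-the-car",
        "operating system": "https://example.org/jaguar-the-os",
    },
}

# the crawler's side: fold every annotation it stumbles over
# into a knowledge base keyed by term.
knowledge_base = {}

def crawl(annotation):
    senses = knowledge_base.setdefault(annotation["term"], {})
    senses.update(annotation["senses"])

crawl(page_annotation)

# at query time, an ambiguous term triggers the choice of senses
def disambiguate(query):
    senses = knowledge_base.get(query, {})
    return sorted(senses)     # the alternatives to offer the user

print(disambiguate("jaguar"))   # ['animal', 'car', 'operating system']
```

the regularizing step is the shared annotation format; once that exists, any page author can contribute, which is the point being made.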
we've all contributed, in one way or another, to the content of the net--increasingly so. at google--at youtube--we're getting 15 hours of video per minute being uploaded. now, one could argue over the value of those 15 hours of video, but the idea that people are generating a lot of information and putting it into the net is indisputable.

the second thing i'm worried about: every time you use a program to produce a file--for example, a word file or a microsoft spreadsheet or powerpoint or any of the other things that we commonly use--maybe it's a photograph program or an emulation or simulation--these are complex objects. these complex digital objects are not understandable or interpretable unless you have the software that knows what the format means and how to interpret it. if we ever lose the software that knows how to interpret the files, all we will have is a pile of rotten bytes, because they won't be meaningful. and what i'm worried about, quite honestly, is that we don't have a regular process of preserving enough information to assure that our complex digital files will be meaningful over a period of time. and here i'm not talking about five years or 10 years. i'm talking about hundreds of years. now, i have a picture over here, which you may or may not be able to see very well. this is a manuscript that was written in 1200 a.d., and it's still readable today--800 years later.

when you pull out a polycarbonate cd-rom, and somebody says how long is this going to last, you'll be damn lucky if it lasts 10 to 15 years. and even if the cd-rom lasts as a physical medium, will the software that knows how to interpret the bytes on that cd-rom last? and some people have pooh-poohed this as an irrelevant issue: of course people will translate the important stuff into the new versions of software, and you shouldn't worry about it, and the stuff that didn't get translated wasn't important anyway. that statement was made in the presence of about 50 librarians. it took us half an hour to get the librarians off the ceiling, because they pointed out that sometimes the importance of information isn't known for 100 or 200 years. so i am very concerned that we have a practice and a process for preserving the software that knows how to interpret these complex digital files.

there is an intellectual property issue hiding in all of this. suppose that you're company a, and you decide that for good and sundry business reasons you're not going to support this piece of software anymore. and so you stop, and the next version of something that you make isn't compatible with it. but we have hundreds of gazillions of bytes of data that's based on this thing that you're not supporting anymore.
so we might come to you and say, "well, you're not supporting it. will you put that up on the net--put the source code up there?" and you'll say, "no, of course i won't do that, because i'm using a lot of that source code for another product." so we might say, "well, would you let us run that program so that it's accessible on the net remotely?" and you may or may not agree to do that. but then you might point out that this thing only works on that operating system version, so you'd better get the operating system up there too. and we say, "oh, we have to go talk to that guy, right?" okay. so now we have to go talk to you about your operating system, and you say, "well, actually, i'm using a lot of that code over again--you can't have the source code." so the question, in my mind, is how do you create the incentives for preserving the ability to interpret these data files? and worse, you may have to emulate the machine that the operating system ran on, that the application ran on, that knows how to interpret the bytes. so this is not a solved problem. and some people will say, "well, open source will solve the problem." i would argue that it's not clear that open source will solve the problem, and it certainly isn't clear that everyone will continue to evolve the open source so that newer versions map into older versions of software. so byte rot is a big issue.

all right. the final report here is on the interplanetary extension of the internet. this is not a google project. i get time from google to work on it, but i don't want you to run out of the room saying you just figured out that google's business model is to take over the solar system. that's not what this is about. [laughter] basically, this project got started around 1998. some of us--i was actually at mci at the time--met with some people at the jet propulsion laboratory, and we were talking about exploring space with manned and robotic
systems using the deep space network, which has three big 70-meter dishes--in madrid, spain; canberra, australia; and goldstone, california--talking to spacecraft that are in orbit around the planets or flying past the asteroids or landing on the surface of mars. what you see on the top portion of the screen there are two of the four mars orbiters--mars express and mars reconnaissance orbiter. and on the bottom part you see the rovers on the left--there are two still operating on mars--and the phoenix lander, which landed and ceased operation after about a month and a half or so at the north pole of mars. what you might not know, though, is that on the rovers, the long-distance radio--the one that was supposed to report results all the way back to earth directly--didn't work. i mean, it did, but it overheated, and they had to reduce the duty cycle. so first the scientists were upset, because the data rate of that radio was only 28.5 kilobits a second as it was, and reducing the duty cycle meant less data came back. so they were all very upset. the engineers at jpl said, "well, we have another radio. it's on the rover, and it's on the orbiters, and it goes 128 kilobits a second, but it can only reach orbital altitude. it can't go all the way back to earth." so they reprogrammed the rovers and the orbiters so that they would pull the data up from the rover and hold on to it until they got to the right place in their orbit, then transmit the data back directly to the deep space network at 128 kilobits a second, because the orbiter had better power supplies, not being down on the dusty surface.

well, speaking of that, the rovers lasted a lot longer than expected. the original mission was, like, 90 days. it's now over five years. and one of the reasons they've continued to work is that the solar panels have remained cleaner than we expected. we thought they were going to get very dusty very quickly, stop converting sunlight to power, and eventually the batteries wouldn't charge up enough. personally, i think there's somebody up there dusting them off, [laughter] though we haven't got them on the video camera. the real answer, it turns out, is that there are little dust storms on the surface of mars, and they blow the dust off the solar panels. you can actually see it when you're down in the operations station: a little dust storm comes along, and you watch the voltage level coming from the solar panels go up.

so anyway, we started talking about how to improve the networking capacity--not just the data rate, but the richness of networking that could be done. i'm going to skip over google mars, except to tell you: go to google, switch to google mars. the imagery is spectacular--the three-dimensional views of where the rovers have been--all stitched together. so you can search around there and see what it looked like. we started out saying, "why don't we build an interplanetary internet?" and we thought, why don't we use tcp/ip to do that? it seemed to work okay on earth. it ought to work on mars. and it does. the problem is it doesn't work between the planets, and the reason is obvious, right? the distance between the planets is literally astronomical. [laughter] the distance between earth and mars varies from 35 million to 235 million miles, which at the speed of light translates into 3½ minutes to 20 minutes one way. so try to do flow control with tcp with a 40-minute round-trip time. it's a real simple flow control algorithm, right? you say, "i ran out of room. stop!" and if the other guy hears you in about 20 or 30 or 60 or 100 milliseconds, that's okay. but what if it's 20 minutes later, and all this stuff is coming at you full speed? packets are falling on the floor. flow control doesn't work. and there's the other problem. it's called celestial motion. planets rotate, and we haven't figured out how to stop that. [laughter]
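[editor's aside: the one-way delays quoted here are just distance over the speed of light; a two-line check, using the 35- and 235-million-mile figures from the talk:]

```python
SPEED_OF_LIGHT_MI_S = 186_282          # miles per second, in vacuum

def one_way_delay_minutes(miles):
    # signal propagation time for a given earth-mars distance
    return miles / SPEED_OF_LIGHT_MI_S / 60

print(round(one_way_delay_minutes(35_000_000), 1))    # ~3.1 minutes at closest approach
print(round(one_way_delay_minutes(235_000_000), 1))   # ~21.0 minutes at the far extreme
```

no retransmission or flow-control scheme tuned for millisecond round trips survives a round trip measured in tens of minutes, which is the argument being made.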
so the problem here is that you're talking to something on the surface, and if the thing rotates, you can't talk to it anymore until it comes back around--same problem with the orbiters. it's a very disrupted system, and the delays are (a) inescapable and (b) variable. this is not a happy environment for tcp/ip. so we ended up inventing a new set of protocols called delay- and disruption-tolerant networking. and--to make a long story short, so we can have some q and a--we have tested it on the surface of the earth in a lot of different configurations, but we just started the deep space testing last november. nasa gave us permission to upload our new protocols to the deep impact spacecraft, which had gone up to rendezvous with a comet and launched a probe into it. the probe is gone, but the platform is still in orbit around the sun. it's a very eccentric orbit, so it was on its way back towards earth. we uploaded the protocols in october, and we spent a month transmitting data back and forth between earth and that spacecraft--didn't drop a byte. i mean, it was really impressive, even when we had power failures and other kinds of things that were not planned. having tested it that way, we are going to upload to the international space station this summer, and in the fall we're going to re-load the protocols on board what will be renamed the epoxi platform--the same spacecraft, just going to a different comet. so we'll have a 3-node interplanetary test going on before the end of this year. if those are all successful, then nasa will award us what's called technology readiness level 8, which means this technology is ready to deploy in live systems. we've already started talking to the consultative committee for space data systems about standardizing the dtn protocols for all the space missions that are launched by the different space-faring countries. the whole idea here--what we hope, frankly--let me just skip through this part--what we hope is that over time, every time a new mission gets launched and then completes its primary responsibilities, it can be converted into part of an interplanetary backbone network, because everything will be compatible--just like when you plug into the internet today, you can talk to the 600 million machines that are out there. so we're not trying to build an interplanetary network and hope somebody comes. that's not what we're trying to do. we want to build the capability to allow it to emerge from all the missions that are launched for scientific reasons. okay. so that's up to the minute on the interplanetary internet. let me stop here and ask if you have questions. [applause] thank you. we have a question here. yes, sir.
>> cardiologists--[inaudible]

>>cerf: absolutely. and, in fact, one of the things that we would like to do is to get the dtn protocols into mobiles, so that we have resilience in the presence of noise and disconnectivity and everything else--unless you were thinking of real-time transmission. yes. well, if you want to have real-time transmission, then you need to have better connectivity than you get with a mobile. but the point is well-taken. you'd like to do it so you don't have to be tied to the hospital bed. okay.

>> ekgs--[inaudible]

>>cerf: i'm sorry?

>> ekgs--

>>cerf: ekgs, for example. yes. well--[inaudible] when you're in the emergency vehicles, they'll transmit ekg information back to the emergency room. but it would be nice to do that from home, for example, and to be able to get other kinds of sensor data about the state of health.

>> [inaudible]

>>cerf: say it again.

>> blood sugars.

>>cerf: blood sugars. oh, there are a whole series of--

>> [inaudible]--possibilities ??--

>>cerf: yeah. there are a whole bunch of things that one would _____ ?? that's true.

>> there's one other--real quick ??

>>cerf: yeah.

>> the only time i've ever emailed you in my life was--[inaudible]--guys missed an opportunity.

>>cerf: yes. the only one?

>> [inaudible]--a very important one. way back when e-mail wasn't--[inaudible]--calling up other computers on the long distance phone lines at medifax ??

>>cerf: yes.

>> back when you and i were grad students at ucla. and so it happens that they could never figure out a way of charging for the e-mail because so many people were--[inaudible]

>>cerf: well--

>> and then the arpanet had a chance to charge for e-mail, and if you only had--[laughter]

>>cerf: hey--no--actually, let me tell you something. i did try to charge for e-mail, because i don't _____ ?? mail. it was a commercial system; i charged a dollar for each e-mail message, and it got paid for in 1983. then i made the mistake of connecting it to the internet in 1989, and everybody said, "hey, it's free over there. how come you're charging for it?" and, of course, now here we are. nobody pays for e-mail. what people pay for now is to not deliver e-mail, right? [laughter] it's spam. yes, sir.
>> thanks for a delightful, informative talk. your focus has been largely on the technology and devices. but arguably, the biggest change in the last two years is the social aspect of newly generated content. you mentioned youtube--it's a phenomenon; it makes it number three on the web. and twitter has 14 million users, and the growth of social applications seems arguably the largest change. how is that affecting the internet?

>>cerf: well, it certainly affects it in one way. people use it more and more for various things, and they use it in real time. twitter--i've seen it create the flash crowd phenomenon. if any of you read david brin, you know about flash crowds. and what twitter and instant messaging do is create the moral equivalent of mobs in the virtual space. i'm quite serious about that. if you know anything about mob behavior, you know how people will assume things and then act on them without really understanding what's going on, because they just see everybody else doing something. we should actually be attentive to the social effects of some of these online environments. now, i just came back from yale university yesterday, where we debated in the yale political union the question of whether the virtual social environments were real or not. and it was an interesting, very lively debate. the people who believe that social networking is as real as face-to-face interactions--they're different, but they involve real people, real feelings, real concerns--you can be as swayed by an online interaction as you can by a face-to-face interaction. the vote came to 43 to 7, or something like that. there were a few diehards who believe that the virtual environment isn't real, but for most of us, it has a lot of reality to it. it also has some negative forms to it.

>> [inaudible]--and terrorists--[inaudible]--are equally efficient and empowered by such social technologies.

>>cerf: that's right. although, you know what esther dyson says--i think she's right about this--she says that the antidote for bad information is not suppression but more information. but it does put an obligation on each one of us who uses these systems to distinguish good-quality information from bad-quality information. so suddenly critical thinking is a very important piece. it always was, right? you could get bad information from radio, television, newspapers, magazines, or friends and your parents, and you had to distinguish what was useful and what information you were going to use. the internet just sort of highlights that even more, because of the huge quantity of information and misinformation that's available.

>> we have a question, maybe, further back--

>>cerf: way in the back. okay. i don't know if i'll be able to hear you from there, because i'm hearing impaired.

>> i'll try to shout. so one of the things we basically know now is about net neutrality, right, and the--so my question is, do you think that there's any room in the future of ip, in any conceivable timeframe, to maybe re-look at things like--[inaudible]--preservation site--[inaudible]--kind of make the discussion
more of a technical one, rather than--[inaudible]--a political one.

>>cerf: okay. so net neutrality has been a big debate, as you know--and especially here in this part of the world. in fact, time warner just did a little unscheduled experiment [laughter] on that. it didn't work out too well for them. let me start out by saying that i adopted the phrase "nondiscriminatory access" to the internet as an alternative to net neutrality, and the only reason for that is that the term net neutrality has been so badly distorted. people who were against it were claiming that those who were for it believed that every packet should be treated exactly the same way, that there should be no priority, that you couldn't charge anybody more for using more capacity--and that's not true. my view is that you want to assure that each user is able to reach every supplier of service--or every other user, on p2p--fairly and equitably. if you use more capacity, more bandwidth than i do, it's not unreasonable for the isp to charge you more. but the metric should not be how many bytes you transmitted; it should be at what rate you transmitted them, because the resource crunch is not the total number of bytes moved--it's how quickly you move the bytes. so if i move a terabyte in a month, that's not nearly the stress on the net as moving a terabyte in two seconds. and so i don't mind being told that if i want a higher data rate, i have to pay more. that's a tiered structure which i think is reasonable. what i don't like is the idea of an isp saying to an application provider, "we're not going to allow your traffic to go over the high-speed lanes unless you pay us." first of all, i'm already paying to get on to the net in the first place, with whatever capacity i can afford and believe is needed. the consumer is the one that wants to be able to say, "if i buy broadband network access, i want to use all of that capacity to get to that service over there, and you can't interfere--or shouldn't interfere--with that." so first of all, the metric should be data rate, not total data transferred.
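[editor's aside: the terabyte comparison is easy to quantify; a sketch, using the terabyte-in-a-month versus terabyte-in-two-seconds figures from the answer above:]

```python
TERABYTE_BITS = 8 * 10 ** 12          # one terabyte expressed in bits
SECONDS_PER_MONTH = 30 * 24 * 3600    # a 30-day month

# average rate if the terabyte trickles out over a month
per_month_mbps = TERABYTE_BITS / SECONDS_PER_MONTH / 1e6
print(round(per_month_mbps, 1))       # 3.1 (megabits per second)

# the same terabyte moved in two seconds
per_burst_tbps = TERABYTE_BITS / 2 / 1e12
print(per_burst_tbps)                 # 4.0 (terabits per second)
```

roughly a million-fold difference in instantaneous load for the same byte count, which is why rate, not volume, is proposed as the billing metric.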
and second, i think we should be alert to anti-competitive abuses. but i do accept the idea that you can have tiered pricing, and that you can have services packaged differently--some of them may need low delay or low latency, and some of them are going to be control traffic, which you clearly don't want to inhibit. but when the system becomes congested, i think it's fair to allocate the capacity in accordance with what the users are willing to pay. i also think users would like to have fixed pricing. i would much rather know what my cost is going to be for the month and have my traffic rate capped, rather than having a surprise bill at the end of the month. and if i don't like the service i'm getting, then i should be able to buy more, if they're willing to sell more. the other thing i think is important is transparency from the supplier. the consumer should know what they're actually getting, and they should be able to measure it, so we should have tools that let people validate that they're getting what their contract says they should get. okay. is that--? that may be all the time we've got.

>> i think that there's a reception--that was the last question--[inaudible]--reception outside. you're welcome to--[inaudible]

>>cerf: i'll be happy to chat as long as my voice holds up.

>> [inaudible]--we have this plaque to commemorate your lecture at this ?? university.

>>cerf: thank you.

>> [inaudible]--cover on that and get that back in there--[inaudible] [applause]

>>cerf: all right. i appreciate that. [applause]