Internet Measurement Conference 2012 (by Hamed Haddadi)

IMC 2012. http://www-net.cs.umass.edu/imc2012/program.htm

Paper 1: Using the CAIDA telescope to collect scans of SIP services; the scans have a sophisticated pattern which survives even when only a low number of hosts take part in the botnet. The paper is a good demonstration of solid measurement work, like many other IMC papers.
Also an amazing animation (CAIDA Cuttlefish).

Paper 2: Prefix hijacking detection has been attracting a lot of attention lately. They form a live fingerprint of route update distribution patterns, and identify and classify hijacks, failures and route anomalies using threshold techniques and distributed “eyes”, with less than a 10-second delay.

Paper 4: Looking at one-way traffic on the net, which can shed light on a large number of anomalies; they use a large NetFlow dataset to analyse these packets (which never receive a reply). Interestingly, over the 7 years of their data, a significant portion of flows (30–70%) is one-way, but these account for a very low volume of traffic as they are usually small packets.

Paper 3: The paper focuses on concurrent prefix hijacks, where an AS hijacks prefixes of a number of other ASes. These are becoming more common, as full-table leaks are difficult and are detected faster. It is also a big task to filter out the individual legitimate changes in AS prefix announcements. There are a number of interesting case studies in the paper.

Paper 2 of the morning session: fast classification at wire speed with commodity hardware. The paper has an interesting analysis of the pros and cons of speed vs. accuracy, number of cores, and amount of memory; they have used synthetic and real traces from CAIDA. Optimal classification is achieved when one core is dedicated per queue.

1. Fathom: A Browser-based Network Measurement Platform (review)
Mohan Dhawan (Rutgers University), Justin Samuel (UC Berkeley), Renata Teixeira (CNRS & UPMC), Christian Kreibich, Mark Allman, and Nicholas Weaver (ICSI), and Vern Paxson (ICSI & UC Berkeley)
Interesting measurement methodology using a Firefox extension.

http://www-net.cs.umass.edu/imc2012/papers/p87.pdf

Transition to IPv6: they have used JavaScript on websites and Google Flash ads to estimate the number of observed networks that have IPv6 enabled, though using these has introduced very interesting biases towards Asian and Latin American countries. They notice that no one is taking action on adoption. A high proportion of 6to4 tunnelling is seen, and corporate networks seem to be leading the way in adoption. The findings indicate extra delay for Teredo, hence Microsoft hasn’t enabled it by default (http://en.wikipedia.org/wiki/Teredo_tunneling).
The sampling technique in the paper is particularly interesting.

3. MAPLE: A Scalable Architecture for Maintaining Packet Latency Measurements (review)
Myungjin Lee (Purdue University), Nick Duffield (AT&T Labs-Research), and Ramana Rao Kompella (Purdue University)
Another tool paper, specific to latency measurements; it moves to per-packet granularity, obtaining latency measurements at the packet level rather than the flow level. They use timestamped packets, keeping track of packets using hash tables and a variant of Bloom filters for efficiency.
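
A rough sketch of the general idea (my own simplification: a plain dict stands in for the paper's Bloom-filter variant, and the hashing scheme is illustrative, not MAPLE's):

# Minimal sketch of per-packet latency tracking via packet hashing.
import hashlib

def packet_key(pkt_bytes: bytes) -> str:
    # Hash invariant packet content so the same packet can be
    # recognised at the ingress and egress observation points.
    return hashlib.sha1(pkt_bytes).hexdigest()

class LatencyStore:
    def __init__(self):
        self.ingress_ts = {}  # packet hash -> ingress timestamp

    def on_ingress(self, pkt_bytes: bytes, ts: float):
        self.ingress_ts[packet_key(pkt_bytes)] = ts

    def on_egress(self, pkt_bytes: bytes, ts: float):
        t0 = self.ingress_ts.pop(packet_key(pkt_bytes), None)
        return None if t0 is None else ts - t0  # per-packet latency

store = LatencyStore()
store.on_ingress(b"payload-1", 10.000)
print(store.on_egress(b"payload-1", 10.004))  # -> 0.004 s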

4. Can you GET me now? Estimating the Time-to-First-Byte of HTTP transactions with Passive Measurements (review) (short paper)
Emir Halepovic, Jeffrey Pang, and Oliver Spatscheck (AT&T Labs-Research)
The motivation is to measure user-experienced delay, using passive analysis for convenience and representativeness, defining TTFB as the time between the SYN-ACK and the first byte of HTTP data. They show TTFB captures user experience better than RTT.
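
A toy illustration of that definition on an invented packet timeline:

# TTFB from a passively captured packet timeline: time from the
# server's SYN-ACK to the first byte of HTTP response data.
# Event tuples below are made up for illustration.
events = [
    ("SYN",       0.000),
    ("SYN-ACK",   0.045),
    ("ACK",       0.046),
    ("GET",       0.047),
    ("HTTP-DATA", 0.210),   # first response byte observed
]

def ttfb(events):
    t_synack = next(t for e, t in events if e == "SYN-ACK")
    t_data = next(t for e, t in events if e == "HTTP-DATA")
    return t_data - t_synack

print(f"TTFB = {ttfb(events)*1000:.0f} ms")   # TTFB = 165 ms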

5. Towards Geolocation of Millions of IP Addresses (review) (short paper)
Zi Hu, John Heidemann, and Yuri Pradkin
Improvements to the popular MaxMind geolocation system, in an open geolocation database format for all addresses. They use a vantage-point system to triangulate IP address locations; accuracy is preserved by choosing an appropriate number of vantage points.
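
The triangulation constraint can be sketched as follows (a generic illustration of RTT-based feasibility, not the paper's exact algorithm; the vantage points, RTTs and the ~200 km/ms fibre propagation speed are my assumptions):

# An RTT of r ms bounds the target within roughly (r/2)*200 km of the
# prober; a candidate location must fall inside every vantage point's
# disc. Coordinates and RTTs below are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two (lat, lon) points
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2-lat1)/2)**2 + cos(lat1)*cos(lat2)*sin((lon2-lon1)/2)**2
    return 2 * 6371 * asin(sqrt(h))

vantage_points = [      # (lat, lon, measured RTT in ms)
    (51.5, -0.1, 12.0),  # London
    (48.9,  2.4, 16.0),  # Paris
]

def feasible(lat, lon):
    # target must lie within each vantage point's maximum-distance disc
    return all(haversine_km(lat, lon, vlat, vlon) <= (rtt / 2) * 200
               for vlat, vlon, rtt in vantage_points)

print(feasible(52.0, 0.1))   # candidate near Cambridge -> True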

1. Evolution of Social-Attribute Networks: Measurements, Modeling, and Implications using Google+ (review)
Neil Zhenqiang Gong (EECS, UC Berkeley), Wenchang Xu (CS, Tinghua University), Ling Huang (Intel Lab), Prateek Mittal (EECS, UC Berkeley), Vyas Sekar (Intel Lab), and Emil Stefanov and Dawn Song (EECS, UC Berkeley)
The first large-scale study of an OSN’s evolution. Using breadth-first-search crawling, and differentiating between follower and followee graphs, they design a new model based on the observation that Google+ has a large number of low-degree nodes, with a log-normal degree distribution. This makes Google+ a hybrid between Facebook and Twitter. They also look at triadic-closure models and find them better than preferential attachment.
I am surprised they didn’t check the correlation between the number of posts and node degree; attributes such as LinkedIn-style endorsed skills might also play a role in this relationship.

2. Evolution of a Location-based Online Social Network: Analysis and Models (review)
Miltiadis Allamanis, Salvatore Scellato, and Cecilia Mascolo (University of Cambridge)
Looking at spatial and location-based social networks. Using daily snapshots of the Gowalla social network, looking at check-ins of 122K users, they explore global attachment models such as the preferential attachment, age, distance and gravity models. 30% of new edges are between users that have one check-in in common.

3. New Kid on the Block: Exploring the Google+ Social Graph (review)
Gabriel Magno and Giovanni Comarela (Federal University of Minas Gerais), Diego Saez-Trumper (Universitat Pompeu Fabra), Meeyoung Cha (Korea Advanced Institute of Science and Technology), and Virgilio Almeida (Federal University of Minas Gerais)
Another Google+ paper, looking at information sharing and privacy settings in Google+. Some users expose private data such as home and mobile numbers, though early adopters are known to be more risk-taking. A bunch of other metrics are also discussed, though the types of users are not. The data also shows a strong geographical correlation of friendship between users, showing that offline relationships are reflected in the data too. I imagine the data has obvious errors, as some users put premium-rate numbers in the phone field to collect money :)

4. Multi-scale Dynamics in a Massive Online Social Network (review)
Xiaohan Zhao (UC Santa Barbara), Alessandra Sala (Bell Labs, Ireland), Christo Wilson (UC Santa Barbara), Xiao Wang (Renren Inc.), Sabrina Gaito (Università degli Studi di Milano), and Haitao Zheng and Ben Y. Zhao (UC Santa Barbara)
Looking at the evolution of user activity and growth of the network, using Renren (the Chinese Facebook equivalent), capturing node and edge dynamics over 2 years: network growth, the effect of node age and preferential attachment, and how these change as the network matures. They also look at community formation, community lifetimes and similarity, using set intersection and the Jaccard coefficient. The driving force behind edge creation shifts from new nodes to old nodes as the network grows, and the strength of preferential attachment also decays.

Day 2

8:30-10:15 Video On Demand. Session Chair: Mark Allman (ICSI)

1. Watching Video from Everywhere: a Study of the PPTV Mobile VoD System (review)

Zhenyu Li, Jiali Lin, Marc-Ismael Akodjenou-Jeannin, and Gaogang Xie (ICT, CAS), Mohamed Ali Kaafar (INRIA), and Yun Jin and Gang Peng (PPlive)
A dataset of smartphone video viewing by 4M users, watching 400K videos over two weeks. The results can be a good guide for those designing wireless provisioning; the trends of watching long versus short videos are displayed against time of day, which is interesting. 3G users are more likely to watch movies, but they often give up at the beginning.

2. Program popularity and viewer behaviour in a large TV on demand system (review)

Henrik Abrahamsson (SICS) and Mattias Nordmark (TeliaSonera)

Looking at TV and video-on-demand access patterns; the usual heavy-tail and top-100 popularity trends can be seen. They find cacheability very high: caching the top 5% of videos gives a hit rate of around 50%.
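
A quick back-of-the-envelope check of that claim, assuming (my assumption, not the paper's empirical data) a Zipf popularity distribution:

# Fraction of requests hitting a cache that holds the top 5% of a
# 10,000-video catalogue; catalogue size and Zipf exponent are assumed.
N, alpha = 10_000, 0.8
weights = [1 / r**alpha for r in range(1, N + 1)]   # Zipf popularity by rank
top5 = int(0.05 * N)
hit_rate = sum(weights[:top5]) / sum(weights)
print(f"hit rate with top 5% cached: {hit_rate:.0%}")   # ~50% for these parameters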

Video Stream Quality Impacts Viewer Behavior: Inferring Causality using Quasi-Experimental Designs (review)

S. Shanmuga Krishnan (Akamai Technologies) and Ramesh K. Sitaraman (University of Massachusetts, Amherst and Akamai Technologies)

A nice introduction to video delivery economics, aiming to improve user behaviour and performance. The performance aspect is well understood, but what “improved user behaviour” means is less clear. A large dataset of video views is presented. Using quasi-experimental designs based on randomised experiments (Fisher, 1937), they look at correlation vs. causation for different factors such as geography and content type, by treating users differently, for example re-buffering videos and observing the effect on video abandonment. Patience increases with the length of the video, so short clips are abandoned quickly if they are slow to load. Mobile users are more patient than fibre users, so access technology also plays a role.

Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard (review)

Te-Yuan Huang, Nikhil Handigol, Brandon Heller, Nick McKeown, and Ramesh Johari (Stanford University)

The performance of video rate selection over HTTP/TCP is analysed. Competing flows cause the selected video rate to drop too low, below the acceptable value. The on-off traffic pattern due to buffering heavily affects TCP’s congestion window management, due to slow start, and hence causes bandwidth underestimation. This comes from the video client trying to do TCP’s job and estimating bandwidth itself. Perhaps a video-specific protocol is needed?

On the Incompleteness of the AS-level graph: a Novel Methodology for BGP Route Collector Placement (review)
Enrico Gregori (IIT-CNR), Alessandro Improta (University of Pisa / IIT-CNR), Luciano Lenzini (University of Pisa), Lorenzo Rossi (IIT-CNR), and Luca Sani (IMT Lucca)

The paper shows the geographic distribution of feeders and their coverage of the AS topology dataset. They increase accuracy by adding more route collectors; I believe (and Walter Willinger also mentioned) the work could improve heavily by using IXP data.

Quantifying Violations of Destination-based Forwarding on the Internet (review) (short paper)

Tobias Flach, Ethan Katz-Bassett, and Ramesh Govindan (University of Southern California)

Using reverse traceroute to find destination-based forwarding violations, e.g., by MPLS tunnels or load balancing, using PlanetLab nodes and destinations, with spoofed packets along paths. A large portion of violations is caused by load balancing: for 29% of the targeted routers, the router forwards traffic going to a single destination via different next hops, and 1.3% of the routers even select next hops in different ASes.
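
The core check can be sketched like this (my simplification of the paper's method; the observations are invented):

# Destination-based forwarding means a router's next hop is a function
# of the destination alone, so two different next hops observed at the
# same router for the same destination flags a violation.
from collections import defaultdict

# (router, destination, next_hop) observations, e.g. from probes.
observations = [
    ("r1", "10.0.0.1", "r2"),
    ("r1", "10.0.0.1", "r3"),   # same router+destination, new next hop
    ("r1", "10.0.0.2", "r2"),
]

next_hops = defaultdict(set)
for router, dst, nh in observations:
    next_hops[(router, dst)].add(nh)

violations = {k: v for k, v in next_hops.items() if len(v) > 1}
print(violations)   # {('r1', '10.0.0.1'): {'r2', 'r3'}}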

Revisiting Broadband Performance (review)

Igor Canadi and Paul Barford (University of Wisconsin) and Joel Sommers (Colgate University)

There is growing interest in broadband subscription, and FCC interest in investigating broadband speeds and rates. They use Ookla data, from a Flash-based performance-testing application with over 700 server locations. The paper uses 59 metro areas across the world, segmenting the areas based on geographic diversity; the data is compared against SamKnows data. Some ISPs are seen to be rate-limiting users to very low speeds.

Obtaining In-Context Measurements of Cellular Network Performance (review)

Aaron Gember and Aditya Akella (University of Wisconsin-Madison) and Jeffrey Pang, Alexander Varshavsky, and Ramon Caceres (AT&T Labs-Research)

Checking the performance of user devices under different conditions: crowdsourcing with 12 volunteers to measure the performance of cellular networks, using speed-test websites to look at latency and loss over different hours of the day. They look at different situations and positions of the phone; however, different data delivery types can affect the results quite heavily.

Cell vs. WiFi: On the Performance of Metro Area Mobile Connections (review)

Joel Sommers (Colgate University) and Paul Barford (University of Wisconsin)

Another mobile performance measurement with speed-test crowdsourced data collection, also from native apps on smartphones. iOS devices show more latency than Android devices, perhaps due to poor OS or API design. They find WiFi performance better, but cellular more consistent.

Network Performance of Smart Mobile Handhelds in a University Campus WiFi Network (review)

Xian Chen and Ruofan Jin (University of Connecticut), Kyoungwon Suh (Illinois State University), and Bing Wang and Wei Wei (University of Connecticut)

An interesting paper comparing CDN performance between Akamai and Google on a campus network.

1. Breaking for Commercials: Characterizing Mobile Advertising (review)

Narseo Vallina-Rodriguez and Jay Shah (University of Cambridge), Alessandro Finamore (Politecnico di Torino), Hamed Haddadi (Queen Mary, University of London), Yan Grunenberger and Konstantina Papagiannaki (Telefonica Research), and Jon Crowcroft (University of Cambridge)

BEST paper! read it fully! :)

Screen-Off Traffic Characterization and Optimization in 3G/4G Networks (review)(short paper)

Junxian Huang, Feng Qian, and Z. Morley Mao (University of Michigan) and Subhabrata Sen and Oliver Spatscheck (AT&T Labs-Research)

Collecting data from 20 volunteers on Android for 5 months, sampling screen status at 1 Hz. Screen-off traffic consumes half of the energy spent on the network interface, because applications download less and the traffic pattern changes; screen-aware fast dormancy increases the energy saving by 15%.

Configuring DHCP Leases in the Smartphone Era (review) (short paper)

Ioannis Papapanagiotou (North Carolina State University) and Erich M Nahum and Vasileios Pappas (IBM Research)

Using a big trace to look at DHCP lease durations and lifetimes in corporate and academic environments.

Video Telephony for End-consumers: Measurement Study of Google+, iChat, and Skype (review)

Yang Xu, Chenguang Yu, Jingjiang Li, and Yong Liu (Polytechnic Institute of NYU)

This actually won the best paper award; I recommend reading it! They show the effect of video and voice processing on end-to-end delay, and also present the techniques used for scalability.

On Traffic Matrix Completion in the Internet (review)

Gonca Gursun and Mark Crovella (Boston University)

The idea is to reverse-engineer traffic matrices to detect invisible flows (those going through other networks), using the AS topology and the traffic matrices of ASes with matrix-completion methods.
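
A generic sketch of the matrix-completion idea (iterative truncated-SVD imputation; the paper's actual method and rank choice differ):

# Observed entries of a traffic matrix are kept fixed; missing ones
# are repeatedly filled with a rank-r estimate.
import numpy as np

def complete(M, mask, rank=2, iters=100):
    """M: matrix with arbitrary values where mask is False;
    mask: True where the entry was actually measured."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r approximation
        X = np.where(mask, M, X_low)   # keep measured entries fixed
    return X

rng = np.random.default_rng(0)
true = rng.random((8, 2)) @ rng.random((2, 8))   # rank-2 ground truth
mask = rng.random((8, 8)) > 0.3                  # ~70% entries observed
est = complete(true, mask)
print(np.abs(est - true)[~mask].max())           # residual on missing entries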

DNS to the rescue: Discerning Content and Services in a Tangled Web (review)

Ignacio Bermudez, Marco Mellia, and Maurizio Munafo` (Politecnico di Torino) and Ram Keralapura and Antonio Nucci (Narus Inc.)

An interesting paper about the complex content-delivery chain on the Internet, and a service which helps classify the type of content.

Beyond Friendship: Modeling User Activity Graphs on Social Network-Based Gifting Applications (review)
Atif Nazir, Alex Waagen, Vikram S. Vijayaraghavan, Chen-Nee Chuah, and Raissa D’Souza (UC Davis) and Balachander Krishnamurthy (AT&T Labs-Research)

Aiming to model user activity on OSNs, using Facebook apps data to look at user activity. Power-law fits are seen for in-degrees, but the out-degree has a strong heavy tail; node activity has to be modelled from connectivity.

Inside Dropbox: Understanding Personal Cloud Storage Services (review)

Idilio Drago (University of Twente), Marco Mellia and Maurizio M. Munafo (Politecnico di Torino), and Anna Sperotto, Ramin Sadre, and Aiko Pras (University of Twente)

Looking at the Dropbox data storage and file storage system, which splits files into 4 MB chunks and uses encrypted communication, with a separation between storage and control traffic. Dropbox seems to be a very popular app, mainly used via the native client. Experiments using PlanetLab show that generally all clients use the same data centres in the US (Amazon data centres for data, and control in California). The slicing into chunks means that many transfers are too small to use the bandwidth efficiently due to TCP slow start, so even for large files, filling the channel capacity takes longer.
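
The chunking scheme they describe can be sketched as follows (the 4 MB chunk size is from the paper; the hashing detail is my illustration, though chunk-level hashing is how deduplication is commonly done):

import hashlib

CHUNK = 4 * 1024 * 1024   # 4 MB, as described in the paper

def chunk_file(path):
    """Yield (sha256, bytes) per 4 MB chunk. Small files produce a
    single small chunk, which is why TCP slow start dominates their
    transfer time."""
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            yield hashlib.sha256(block).hexdigest(), block

# Example usage (assumes "example.bin" exists):
# for digest, block in chunk_file("example.bin"):
#     print(digest[:12], len(block))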

Content delivery and the natural evolution of DNS (review)

John S. Otto, Mario A. Sánchez, John P. Rula, and Fabián E. Bustamante (Northwestern University)

Discusses the use of DNS for dynamic routing, and the use of OpenDNS and Google DNS for these purposes. CDNs depend on the user’s DNS resolver to direct requests, and different redirections mean different performance. Try out namehelp for proactive caching.

Measuring the Deployment of IPv6: Topology, Routing and Performance (review)

Amogh Dhamdhere, Matthew Luckie, Bradley Huffaker, and kc Claffy (CAIDA), Ahmed Elmukashfi (Simula), and Emile Aben (RIPE)

IPv4 addresses have run out. IPv6 has been around but is not used, as it is not backwards compatible; hence tunnelling has been the main growth area. They used measurement data from BGP, AS relationships and lots of other data, and classify ASes into transit providers, content/access/hosting providers, and enterprise customers. They then measured AS-level paths from 7 vantage points towards dual-stacked ASes. They find the IPv4 network maturing, and transit providers deploying IPv6, as are content providers, while the edge is lagging, with Europe and Asia leading.

Report on EPSRC UBHAVE workshop by Hamed Haddadi

Opportunities and Challenges in Interdisciplinary Research

Today I attended this excellent workshop arranged by the EPSRC UBHAVE project team, bringing together a number of excellent computer scientists, engineers, medics, industry members, psychologists and HCI experts from across the world, discussing the challenges faced by interdisciplinary researchers, from convincing patients to carry around monitors, to setting the right interface/sampling rate/data collection strategy for devices and sensors. The interesting projects, the range of smartphone apps, and the adoption of technology in the form of peculiar mixes of software and hardware made for a mesmerising atmosphere. A number of challenges were highlighted:

 

  • difficulty in establishing what method and which collaborator is right
  • privacy issues and security of devices
  • economic and personal incentives to use technology
  • large delay between research grant cycle and industrial advancements
  • Lower academic rewards (promotion/etc) for interdisciplinary research
  • understanding each other!

I worked for 2 years on the Huntington’s Disease project, and indeed, communication between biologists, engineers, mathematicians and computer scientists is a SERIOUSLY challenging issue. However, it is just as vital to go through, as otherwise we face the classic problem that we (i.e., engineers and computer scientists) constantly face: systems designed by geeks, approved in conferences by geeks, adopted by geeks, and often failing to make it to mass markets. At the other extreme, our technology offers scale, speed and accuracy, and more importantly, the ability to monitor in situ, capturing contextual data and relieving the sociologist and biologist from privacy-intrusive and cumbersome ethnography, monitoring, lab experiments and interviews.

 

Rather than poisoning your fresh brains with my rants, I’ll let you have a look at the programme yourself and click on the links! :)

Conference Programme

Morning Session: “Making Multidisciplinary Research Work”

10.30 – 10.40am   Welcome and Introduction
                  Professor Lucy Yardley, University of Southampton, UK
                  Professor Susan Michie, University College London, UK

10.40 – 11.20am   “Engaging the Users in Multidisciplinary Projects: How to find them, what to do with them, and where to go next”
                  Professor Torben Elgaard Jensen, Technical University of Denmark

11.20 – 12.00pm   “Multilevel and Reciprocal Behaviour Change: The Role of Mobile and Social Technologies”
                  Professor Kevin Patrick, University of California, San Diego, USA

12.00 – 12.50pm   Panel Discussion led by Dr Niels Rosenquist, Massachusetts General Hospital

12.50 – 1.40pm    Buffet Lunch (First floor, South Corridor Foyer)

Afternoon Session 1: “The Potential of Digital Technology for Assessing and Changing Behaviour” (Small Meeting House)

1.40 – 2.20pm     “Behavioural Intervention Technologies for Depression”
                  Professor David Mohr, Northwestern University, USA

2.20 – 3.00pm     “My Smartphone told me I’m Stressed”
                  Professor Andrew Campbell, Dartmouth College, USA

3.00 – 3.40pm     “UBhave: Addressing the question of how best to use phones to measure and change behaviour”
                  Professor Lucy Yardley, University of Southampton, UK
                  Dr Cecilia Mascolo, University of Cambridge, UK

3.40 – 4.10pm     Coffee (First floor, South Corridor Foyer)

4.10 – 5.00pm     Panel Discussion led by Professor Susan Michie, University College London, UK

5.00 – 5.30pm     Close

Afternoon Session 2: “Challenges of User Led Innovation for Energy Technologies” (First floor, Room 2)

1.30 – 5.30pm     Led by Dr Alastair Buckley, University of Sheffield, UK


Report on SIGCOMM 2012, Helsinki (by Steve Uhlig, Networks)

http://conferences.sigcomm.org/sigcomm/2012/

1st day: HotSDN workshop attracted a lot of submissions as well as a large attendance (100+).
This reflects how topical SDN is and the growing community working on it.

2nd day:
--------

Keynote and SIGCOMM award: Nick McKeown, Stanford University.
Title: "Mind the gap".
The talk explained the "gap between theory and practice", and how to improve practice, especially by
putting pressure on industry. Nick explained his strategy for finding the right topic:
meet industry, and when they get angry, you might be onto something. Of course that does not mean
the topic is a good research one, but at least it is potentially relevant. Nick said that
papers should be written in the same way as documentation, to explain what we are doing: make
the problem and the story the lead, not the marketing and the flow of the paper itself.
From a technical perspective, Nick explained that the point of SDN
for him is to reduce the complexity of network management (this was questioned by Dina Papagiannaki
during questions). His research strategy is to make everything public, so as to allow others
to reproduce his work. The point is to stand on each other's shoulders rather than compete.
He offered some criticism of the SIGCOMM conference, which is in his opinion too small
(30 papers) and narrow in audience (500). He suggests making it more like SIGGRAPH, with
50% acceptance and more than 2000 attendees, among which plenty of industry.

The test of time award went to the "Tussle in cyberspace" paper from SIGCOMM 2002. We believe
that in the future there should be a presentation for this award, because such papers have high
impact for different reasons: hearing from the authors about the intention of the paper, and what
it turned out to have impact on, would be interesting for the community.

Session 1: Middlebox and Middleware
-----------------------------------
- Multi-resource Fair Queuing: Best paper award. Not clear why, frankly. Old topic.
- Making middleboxes someone else's problem: Putting middleboxes in the Cloud. Such
an idea was expected, and seems to work pretty well for some specific applications.
- Hyperdex: Interesting searchable key-value store. Talk did not do justice to the
applicability of the paper to networking.

Session 2: Wireless Communication (by Cigdem Sengul from T-labs)
---------------------------------

- Picasso: Flexible RF and Spectrum Slicing, by Hong et al. from Stanford University, looks
at full duplexing (i.e., receiving and transmitting at the same time) in adjacent bands. They also
presented a demo on Day 2, which I unfortunately missed seeing due to having a demo at the
same time. For more information on Picasso: http://www.stanford.edu/~hsiying/Picasso.html
- Spinal codes, by Perry et al. from MIT, invents a family of rateless codes to get close
to Shannon capacity. For more information on their research: http://nms.csail.mit.edu/spinal/
- Efficient and reliable low-power backscatter networks: treats nodes as virtual senders
and relies on collision patterns as codes.

Poster and demo session
------------------------
Lots of variety in the topics, and quite interesting. The sessions were attended by a lot of
people; very good for visibility!

Session 3: Data Centers: Latency
---------------------------------
- Deadline aware datacenter TCP: changing TCP to improve how it meets deadlines when latency is
an issue.
- Finishing flows quickly: same goal as the previous paper, but through flow scheduling.
- DeTail: reducing the flow completion time tail: again the same story, but with a cross-layer approach...

Session 4: Measuring Networks (mostly by Cigdem Sengul from T-labs)
-----------------------------
- Inferring visibility: inference techniques to try to guess which paths cross or do not cross
a given AS. Very centred on the inference techniques, not so much on whether it works.
- Anatomy of a Large European IXP: The paper emphasizes once more how the Internet is not
what we think it is, with hypergiants and CDNs (content delivery networks) making it flatter.
What is interesting is that data from a single IXP captures all we know of the Internet
(through several BGP-based studies and measurement data) and adds to it by showing the
vast number of connections in the Internet.
- Measuring and Fingerprinting Click-Spam in Ad Networks, by Vacha Dave et al. from UT Austin and
MSR India. For me, this was the best and most enjoyable presentation of the entire conference.
In this work, the authors presented a measurement methodology for identifying click-spam in
advertisement networks, digging into the data to identify fraud activities. Their impressive
results show the pervasiveness of click-spam, especially in the mobile advertising context,
which is interesting not only for Internet researchers but also for Internet users. The
authors also warn that this is an open problem which they do not expect to go away for a long time.

Session 5: Data Centers: Resources Management
---------------------------------------------
- FairCloud: sharing the network in Cloud computing
- The only constant in change: incorporating time-varying network reservations in data centers
- It's not easy being green: tradeoff between access latency, carbon footprint, and electricity 
costs.

Session 6: Wireless and Mobile Networking
-----------------------------------------
- Skipped.

Poster and demo session (2)
---------------------------
Again posters and demos.

Session 7: Best of CCR
-----------------------
This session was dedicated to the best of CCR talks, where the best papers from CCR were
presented by their authors. The papers were (1) Forty Data Communication Research Questions by
Craig Partridge, (2) Extracting Benefit from Harm: Using Malware Pollution to Analyze the
Impact of Political and Geophysical Events on the Internet by Alberto Dainotti et al., and (3)
The Collateral Damage of Internet Censorship by DNS Injection, an anonymous submission presented
by Philip Levis.

Session 7: Network Formalism and Algorithmics
---------------------------------------------
- Perspectives on Network Calculus: a nice tutorial and update on the state of the art.
- Abstractions for network update: using a fine-granular flow-level abstraction to apply
network updates in such a way that packets won't be stuck between network states.
- Pre-classifier to reduce TCAM consumption: TCAMs' main drawback is their power consumption;
this paper relies on a pre-classifier to be able to switch off parts of the TCAM.

Session 8: Streaming and Content Networking
-------------------------------------------
- ShadowStream: adding performance evaluation to the capabilities of a streaming platform.
- Case for Coordinated Internet-scale control plane: Conviva marketing their data and
selling the case for a black-box control plane based on this data. Judging from the
questions, the audience did not buy it.
- Optimizing cost and performance for multihoming: redirecting users to improve QoE. Not
very convincing, as it is again a black box like the previous one.

Session 9: Routing
------------------
- Private and verifiable interdomain routing decisions: a system to help the peers of a network
prove that it propagates the wrong routes.
- LIFEGUARD: practical repair of persistent route failures: assuming that one is able to
locate connectivity failures, the paper proposes to help ISPs through poisoning of the
faulty paths. Very incremental compared to previous work and hard to buy...
- On Chip Networks from a Networking Perspective: Congestion and Scalability in Many Core
Interconnects, by George Nychis et al. from CMU and Microsoft Asia. The paper is indeed
interesting and shows which networking solutions do and do not apply to on-chip
networks. However, I am still wondering what the exact takeaways might be for the networking
community.

Session 10: Data Centers: Network Resilience
--------------------------------------------
- NetPilot: automating datacenter network failure mitigation: deactivate and restart offending
equipment.
- Surviving failures in bandwidth-constrained datacenters: exploiting traffic patterns to
improve behavior under failure.
- Mirror Mirror on the Ceiling: Flexible Wireless Links for Data Centers, by Xia Zhou et al. from
UC Santa Barbara. Again an interesting marriage between different topics: the use of 60 GHz
wireless links. While the mirror idea for reducing interference in these networks is
very interesting for wireless networks, it is not clear how data centers will benefit
from such comparably low-capacity wireless links. Furthermore, scheduling of multiple concurrent
links seems to be an unsolved issue.

Overall, while SIGCOMM papers are extremely strong in their evaluation and execution, few are
really inspiring or tackle fundamental issues in data communications. Part of the problem
might be that talks are too centred on the marketing of the paper itself, and not enough on
the challenges in the area.

Future Network Technologies Research and Innovation in HORIZON2020

Our faculty have been invited to present their vision at the workshop that will take place on 29th June in Brussels, presenting ideas for HORIZON2020 Future Networks Research.

Prof. Steve Uhlig & Dr. Hamed Haddadi, Queen Mary, University of London, UK.

Innovation for the Internet: the need to engage all stakeholders

ABSTRACT

The Internet is evolving at a significant pace due to new usage trends and platforms such as mobile devices, social media, streaming networks and content delivery platforms. Within the next EU framework, researchers need to focus on future trends, devices and usage habits, and strategically align their research to support those needs. In this document, we propose a number of challenges related to the new interactions between different stakeholders. We also discuss how today’s Internet ecosystem requires us not only to revisit the functionalities of the network, but also to rethink the different business models that will shape the future Internet. We further suggest that the societal relevance of the Internet should be better supported by the Horizon 2020 agenda, and encourage future projects to have wider and more specific public engagement and community reach plans, engaging all stakeholders such as user communities, industrial bodies, the research community, policy makers and the Internet governing bodies.

 

Motivation: Today’s changing Internet ecosystem

Today’s Internet [1] differs significantly from the one that is described in popular textbooks [2], [3], [4]. The early commercial Internet had a strongly hierarchical structure, with large transit Internet Service Providers (ISPs) providing global connectivity to a multitude of national and regional ISPs [5].  Most of the applications/content was delivered by client-server applications that were largely centralized. With the recent advent of large-scale content distribution networks (CDNs), e.g., Akamai, Youtube, Yahoo, Limelight, and One Click Hosters (OCHs), e.g., Rapidshare, MegaUpload, the way the Internet is structured and traffic is delivered has fundamentally changed [1].

 

Today, the key players in the application and content delivery ecosystem, e.g., Cloud providers, CDNs, OCHs, data centers and content-sharing websites such as Google and Facebook, often have direct peerings with Internet Service Providers or are co-located within ISPs. Application and content delivery providers rely on massively distributed architectures based on data centers to deliver their content to users. Therefore, the Internet structure is not as strongly hierarchical as it used to be [1].

 

These fundamental changes in application and content delivery and in Internet structure have deep implications for how the Internet will look in the future. Hereafter, we describe how we believe three different aspects of the Internet may lead to significant changes in the way we need to think about the forces that shape the flow of traffic in the Internet. Specifically, we first describe how central DNS has become as a focal point between application/content providers and ISPs. Next, we discuss how software-defined networking may change the ability of many stakeholders to influence the path that the traffic belonging to specific flows will follow across the network infrastructure. Finally, we discuss how the distributed nature of existing application and content delivery networks will, together with changes within forwarding/routing, enable much more advanced handling of the traffic, at a much finer granularity compared to the current Internet.

 

Challenge 1: DNS and Server Redirection

 

The Domain Name System (DNS) was originally intended to provide a naming service, i.e., one-to-one mappings between domain names and IP addresses. Since then, DNS has evolved into a highly scalable system that fulfils the very stringent needs of applications in terms of responsiveness [6,7,8]. Today, the DNS system is a commodity infrastructure that allows application and content providers to map individual users to servers. This behaviour diverges from the original purpose of deploying DNS [10]. As application and content delivery infrastructures control how DNS is used to map end-users to their servers, the transport network, namely ISPs, has very limited control over how traffic flows across the Internet [31]. Note that the case of DNS is a specific instance of a more general class of mapping systems for networked applications, such as trackers used in P2P, or Locator/ID split approaches, e.g., LISP. Whatever the actual mapping system being used, the use of DNS by application/content providers is a sign that network-aware application optimization approaches are needed. P4P as well as Application-Layer Traffic Optimization (ALTO) are possible solutions for this. Direct CDN-ISP collaboration is another way of ensuring that the application side and the network collaborate to provide the best possible service to end-users in a cost-efficient manner [32].
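
As a toy illustration of this user-to-server mapping: the same CDN-hosted name can resolve to different server IPs depending on which resolver (and hence which inferred client location) asks. The sketch below assumes the third-party dnspython package; the hostname is just an example, and the script needs network access.

import dns.resolver   # pip install dnspython

def resolve_with(nameserver, name="www.akamai.com"):
    # Query a specific recursive resolver rather than the system one.
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    return sorted(rr.address for rr in r.resolve(name, "A"))

print(resolve_with("8.8.8.8"))          # Google Public DNS
print(resolve_with("208.67.222.222"))   # OpenDNS; often a different server set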

 

Challenge 2: Software-defined networking

 

Applications and content are not the only places where an Internet (r)evolution is taking place. Thanks to a maturing market that is now close to “carrier grade” [13,14,15,16,17], the deployment of open-source-based routers has significantly increased over the last few years. While these devices do not compete with commercially available high-end switches and routers with respect to reliability, availability and density, they are fit to address specialized tasks within enterprise and ISP networks. Even PC-based routers with open-source routing software are evolving fast enough to foresee their use outside research and academic environments [18,19,20].

 

The success of open-source routing software is being paralleled by increasing virtualization, not only on the server side, but also inside network devices. Server virtualization is now followed by network virtualization, made possible by software-defined networking, e.g., OpenFlow [21], which exposes the data-path logic to the outside world. The model of network devices controlled by proprietary software tied to specific hardware will slowly but surely be made obsolete. Innovation within the network infrastructure will then be possible. A decade ago, IP packets strictly followed the paths decided by routing protocols. Tomorrow, together with the paths chosen by traditional routing protocols, a wide range of possibilities will arise to customize not only the path followed by specific traffic, but also the processing that this traffic undergoes. Indeed, specific actions that are statically performed today by specialized middleboxes placed inside the network, e.g., NAT, encryption, DPI, will be implemented on-path if processing capabilities happen to exist; otherwise the traffic will be dynamically redirected to close-by computational resources. This opens a wide range of applications that could be implemented almost anywhere inside the network infrastructure.
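
A conceptual sketch of the match-action idea underlying OpenFlow-style forwarding (a toy model, not the OpenFlow protocol or any real controller API; field names and actions are invented):

rules = []   # installed by an (imaginary) controller, in priority order

def install(match, action):
    rules.append((match, action))

def forward(pkt):
    # Apply the first matching rule's action to the packet.
    for match, action in rules:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action(pkt)
    return "drop"   # table miss (a real switch might punt to the controller)

install({"dst": "10.0.0.2"}, lambda p: "out:port2")
install({"dst": "10.0.0.3", "tcp_dport": 80},
        lambda p: "redirect:dpi-box")   # on-path processing, as discussed above

print(forward({"dst": "10.0.0.3", "tcp_dport": 80}))  # redirect:dpi-box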

 

Fusing the transport network and applications/content

 

As content moves closer to the end-user for improved quality of experience, and the infrastructure opens up to unprecedented control and flexibility, the old business model of hierarchical providers and customer-provider relationships is hardly viable. Nowadays, delivering applications and content to end-users is becoming a less and less profitable business, except for the few able to capitalize on advertising revenues, e.g., Google, Facebook. On the other side, network infrastructure providers struggle to provide the necessary network bandwidth and low latency for these applications at reasonable costs. The consequence of ever more limited ISP profit margins is a struggle between content providers and the network infrastructure for control of the traffic.

 

This struggle stems from fundamental differences in the business models of application/content providers and ISPs. Today, application/content providers, for example through DNS tweaking, decide the flow of the traffic by properly selecting the server from which a given user fetches some content [8,22,23]. This makes application/content delivery extremely dynamic and adaptive. On the ISP side, most traffic engineering relies on changing the routing configuration [24,25,26]. Tweaking existing routing protocols is not only dangerous, due to the risk of misconfigurations [27], routing instabilities [28] and convergence problems [29,30], but is simply not adequate for choosing paths at the granularity of applications and content.

 

Industry and academia must join forces to address the challenges posed by the evolving Internet. We believe that the three research areas above need critical input from the community in order to enable a truly content-centric Internet. First, even after more than two decades of deployment and evolution, the DNS is still poorly understood. The DNS is much more than a naming system; it is a critical mapping system and a critical point in the application/content distribution arena. Second, software-defined networking opens a wide range of possibilities that would transform the current dumb pipes of the Internet core into a flexible and versatile infrastructure. Further, software-defined networking gives researchers the ability to inject intelligence inside the network without having to think about how it will affect a whole range of legacy protocols.

 

One way to go is to enable the different stakeholders to work together, e.g., enable ISPs to collaborate with application/content providers [31,32]. This can be achieved, for example, by exploiting the diversity in content location to ensure that ISPs’ network engineering is not made obsolete by content provider decisions [31,32], or the other way around. Another option in which we believe is to leverage the flexibility of network virtualization, making the infrastructure much more adaptive than today’s static provisioning [33].

 

New Internet business models and privacy

 

The networks research community has been witnessing an explosive growth in the adoption of wireless devices such as smartphones and tablets. This new fertile market has been fueled by applications and games brought through multiple markets of third-party developers. These markets today rely on “App Stores” provided and controlled by device or operating-system manufacturers such as Apple or Google, recently joined by Facebook. At the heart of this trade lies a particular revenue model: provide attractive content and applications, and in return benefit from a trusted ecosystem built from a large number of users. The majority of these ecosystems revolve around targeted advertising and the use of personal information. Several recent proposals have been made by the networks and social computing research community on enabling marketplaces for personal information [34,35].

 

It has been suggested that personal data is the new currency of the Internet. This highlights the urgent need to understand privacy issues, which requires engagement with policy makers and investment in new methods to create federated marketplaces for resources and data.

 

Engaging all stakeholders

 

The deep changes we discussed create unprecedented opportunities for industry and researchers to develop new solutions that will address not only relevant operational challenges, but also potentially business-critical ones. The ossification of the Internet protocols does not mean that the Internet is not evolving. The Internet has changed enormously over the last decade, and will continue to do so, no matter what. What we observe today is a convergence of applications/content and network infrastructure that questions a model of the Internet that used to separate two stakeholders: application/content infrastructures on the one side and a dumb transport network on the other.

 

The fundamental changes in the Internet lead to fundamental questions about the possible directions in which the Internet might be going, not only at a technical level, but also from a business perspective. These are societal questions that demand answers for the sake of Internet governance, and to ensure that the infrastructure serves the purposes of society as a whole, not of a few business players. Emphasis must also be placed on engagement with users as the focal point of the ecosystem, not only business stakeholders.

 

 

Active Engagement with the European Community and Beyond

 

Traditionally, EU projects in the networking area have not been strongly urged to engage with the public, but have focused their attention on impact for European industry. Given the societal relevance of the Internet in supporting the digital economy, we encourage future projects to have wider and more specific public engagement and community reach plans, engaging user communities, industrial bodies, the research community, policy makers and the Internet governing bodies. This approach will encourage working beyond the usual outputs in the form of periodic reports and standard workshops that do not reach the relevant audience. Re-focusing the dissemination and impact criteria during project evaluation would incentivize projects to target long-term growth and innovation in Europe. We feel that today impact and dissemination mostly play a role in satisfying short-term industrial or business use-cases, which are heavily biased by industrial partners during the review process of project proposals.

 

Lastly, we encourage the inclusion of research and development organisations in China, India, Brazil and similar developing countries, which are shaping the future of network usage trends. Indeed, we now live in a globalized world, meaning that EU projects should compete with their US and Chinese counterparts, both in terms of agenda and in terms of their reach and impact.

 

[1] C. Labovitz, S. Iekel-Johnson, D. McPherson, J. Oberheide, and F. Jahanian, “Internet Inter-Domain Traffic,” in Proc. of ACM SIGCOMM, 2010.

[2] K. Claffy, H. Braun, and G. Polyzos, “Traffic Characteristics of the T1 NSFNET backbone,” in Proc. of IEEE INFOCOM, 1993.

[3] K. Thompson, G. Miller, and R. Wilder, “Wide-Area Internet Traffic Patterns and Characteristics,” IEEE Network Magazine, 11(6), November/December 1997.

[4] W. Fang and L. Peterson, “Inter-AS Traffic Patterns and their Implications,”  in Proc. of IEEE Global Internet Symposium, 1999.

[5]  L. Subramanian, S. Agarwal, J. Rexford, and R. Katz, “Characterizing  the Internet Hierarchy from Multiple Vantage Points,” in Proc. of IEEE INFOCOM, 2002.

[6] B. Krishnamurthy, C. Wills, and Y. Zhang, “On the Use and Performance of Content Distribution Networks,” in Proc. of ACM IMW, 2001.

[7] R. Krishnan, H. Madhyastha, S. Srinivasan, S. Jain, A. Krishnamurthy, T. Anderson, and J. Gao, “Moving Beyond End-to-end Path Information to Optimize CDN Performance,” in Proc. of ACM Internet Measurement Conference, 2009.

[8] T. Leighton, “Improving Performance on the Internet,” Communications of the ACM, 52(2):44–51, 2009.

[9] J. Jung, E. Sit, H. Balakrishnan, and R. Morris, “DNS Performance and the Effectiveness of Caching,” IEEE/ACM Trans. Netw., 10(5):589–603, 2002.

[10] P. Vixie, “What DNS is Not,” Commun. of the ACM, vol. 52, no. 12, 2009.

[11] B. Ager, W. Muehlbauer, G. Smaragdakis, and S. Uhlig, “Comparing DNS Resolvers in the Wild,” in Proc. of ACM Internet Measurement Conference, 2010.

[12] C. Contavalli, W. van der Gaast, S. Leach, and D. Rodden, “Client IP Information in DNS Requests,” IETF draft, work in progress, draft-vandergaast-edns-client-ip-00.txt, Jan 2010.

[13] “Quagga Routing Suite,” http://www.quagga.net.

[14] M. Handley, O. Hodson, and E. Kohler, “XORP: an Open Platform for Network Research,” ACM Comp. Comm. Rev., vol. 33, no. 1, 2003.

[15] J. Edwards, “Enterprises Cut Costs with Open-source Routers,” http://www.computerworld.com/s/article/9133851, 2009.

[16] “IP Infusion ZebOS,” http://www.ipinfusion.com/.

[17] Arista Networks, “EOS: An Extensible Operating System,” www.aristanetworks.com/en/EOS, 2009.

[18] E. Kohler, R. Morris, B. Chen, J. Jannotti, and F. Kaashoek, “The Click Modular Router,” ACM Trans. Comput. Syst., 18(3):263–297, August 2000.

[19] N. Egi, A. Greenhalgh, M. Handley, M. Hoerdt, F. Huici, and L. Mathy, “Towards High Performance Virtual Routers on Commodity Hardware,” in Proc. of ACM CoNEXT, 2008.

[20] M. Dobrescu, N. Egi, K. Argyraki, B. Chun, K. Fall, G. Iannaccone, A. Knies, M. Manesh, and S. Ratnasamy, “RouteBricks: Exploiting Parallelism to Scale Software Routers,” in Proc. of ACM SOSP, 2009.

[21] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “OpenFlow: Enabling Innovation in Campus Networks,” ACM Comp. Comm. Rev., 2008.

[22] C. Huang, A. Wang, J. Li, and K. W. Ross, “Measuring and Evaluating Large-scale CDNs,” in Proc. of ACM Internet Measurement Conference, 2008. Paper withdrawn at Microsoft request.

[23] S. Triukose, Z. Al-Qudah, and M. Rabinovich, “Content Delivery Networks: Protection or Threat?” in Proc. of ESORICS, 2009.

[24] B. Fortz and M. Thorup, “Internet Traffic Engineering by Optimizing OSPF Weights,” in Proc. of IEEE INFOCOM, 2000.

[25] B. Fortz and M. Thorup, “Optimizing OSPF/IS-IS Weights in a Changing World,” IEEE Journal in Selected Areas in Communications, 20(4):756–767, 2002.

[26] Y. Wang, Z. Wang, and L. Zhang, “Internet Traffic Engineering Without Full Mesh Overlaying,” in Proc. of IEEE INFOCOM, 2001.

[27] R. Mahajan, D. Wetherall, and T. Anderson, “Understanding BGP Misconfigurations,” in Proc. of ACM SIGCOMM, 2002.

[28] C. Labovitz, G. R. Malan, and F. Jahanian, “Internet Routing Instability,” in Proc. of ACM SIGCOMM, 1997.

[29]  T. Griffin and G. Wilfong, “An Analysis of BGP Convergence Properties,” in Proc. of ACM SIGCOMM, 1999.

[30] C. Labovitz, A. Ahuja, A. Bose, and F. Jahanian, “Delayed Internet Routing Convergence,” in Proc. of ACM SIGCOMM, 2000.

[32] Ingmar Poese, Benjamin Frank, Bernhard Ager, Georgios Smaragdakis, Steve Uhlig, Anja Feldmann, “Improving Content Delivery with PaDIS,” IEEE Internet Computing, 16(3):46-52, May-June 2012.

[33]  J. He, R. Zhang-Shen, Y. Li, C.-Y. Lee, J. Rexford, and M. Chiang,  “DaVinci: Dynamically Adaptive Virtual Networks for a Customized  Internet,” in Proc. of ACM CoNEXT, 2008.

[34] Hamed Haddadi, Richard Mortier, Steven Hand, Ian Brown, Eiko Yoneki, Derek McAuley and Jon Crowcroft: “Privacy Analytics”. ACM SIGCOMM Computer Communication Review, April 2012.

[35] Christina Aperjis and Bernardo A. Huberman. A Market for Unbiased Private Data: Paying Individuals According to their Privacy Attitudes. Available at http://dx.doi.org/10.2139/ssrn.2046861, April 2012.

Report from ACM EuroSys MPM2012 workshop

Measurement, Privacy, and Mobility (MPM 2012) 

http://www.cambridgeplus.net/MPM12/program.html

Keynote from Steve Uhlig on content delivery platforms, agile network measurement, and understanding the CDN ecosystem. Adaptation to changes in demand is slow today, so it would be better to use virtualisation technologies to manage demand shifts. There is growing infrastructure and storage diversity, which allows for universal content delivery, so virtualisation can enable mobility and agile services.
DSwiss: SecureSafe
Attackers have a variety of methods for accessing data, and password solutions are not enough; it is possible to scan the whole IPv4 address space in a day. Trust in cloud providers is based on social prestige. They use Secure Remote Password (SRP) in order to avoid MITM attacks on passwords, even over insecure channels. In addition to that, a number of key chains and symmetric and asymmetric keys are used to enable document sharing; however, if the user forgets their password AND their recovery code, the data is deleted. Encourages providers to prevent employee access to data.

David Evans, malfunction analysis and privacy attacks
Sensors in buildings have privacy implications since they are not protected. Classifying data using tags enables reflection of the physical environment and reasoning about the privacy implications of sensing in the physical world. Tags can be based on sensor, location and time. This allows for analysis of the sensitivity of data in different contexts, and for using different data sources in conjunction with one another.

Miguel Nunez, Markov-based mobile location prediction
Predicting trajectories is important for services such as content delivery, tourist information, weather reports etc. This has been done using raw trajectories, or clustering of trajectories using semantic mapping. Using Markov models allows probabilistic prediction of sequences of states, using density joint clusters. They used the Microsoft GeoLife dataset and their own dataset to train and test the n-MMC, which gives them about 70–80% accuracy, especially for higher numbers of user POIs.
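
A minimal first-order version of the idea (a simplification of n-MMC, which uses order-n models over clustered POIs; the trajectory below is invented):

from collections import defaultdict, Counter

history = ["home", "work", "gym", "home", "work", "cafe", "home",
           "work", "gym", "home"]

# Count first-order transitions between consecutive POIs.
transitions = defaultdict(Counter)
for cur, nxt in zip(history, history[1:]):
    transitions[cur][nxt] += 1

def predict(state):
    # Most likely next POI given the current one.
    return transitions[state].most_common(1)[0][0]

print(predict("work"))   # 'gym' (seen twice vs 'cafe' once)
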
ANOSIP: Anonymizing the SIP Protocol
Iraklis Leontiadis (Institute Eurecom)
SIP is often used for phone conferencing, with text-based call-flow messages. The aim of the work is to protect the user’s identity from the call portals or man-in-the-middle attacks; a number of techniques are used to achieve this.

Online Privacy: From Users to Markets to Deployment
Dr Vijay Erramilli (Telefónica I+D Research, Spain)
The economic model of the web: free services in exchange for personal data, so advertising is the main driver of the economy. They want to understand the monetization aspect (check their paper on arXiv). They carried out a questionnaire using a browser plugin to ask users about the value of their actions. Highly revisited data and sites yield high gains. They are conducting economics and marketing studies to understand the ecosystem further.

Confidential Carbon Commuting
Chris Elsmore, Anil Madhavapeddy, Ian Leslie, and Amir
Understanding employee commutes is important; however, it is hard to collect the data. The university used an app to collect user data, with a personal container used for data aggregation. It allows sensitive questions to be asked about employee habits. Check lockerproject.org
The Impact of Trace and Adversary Models on Location Privacy Provided by K-anonymity
Volkan Cambazoglu and Christian Rohner (Uppsala University)
They used trace generation with different walk models to simulate locations, k-anonymity for identity protection, and obfuscation to hide the time of events.
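
A toy version of spatial cloaking for location k-anonymity (the general technique being evaluated here, not the paper's specific mechanism; coordinates are invented):

def cloak(user_xy, all_xy, k=3, cell=1.0):
    """Grow the reporting grid cell until >= k users share the user's
    cell, then report the cell instead of the exact point.
    Assumes len(all_xy) >= k, otherwise this never terminates."""
    while True:
        def cell_of(p):
            return (int(p[0] // cell), int(p[1] // cell))
        if sum(cell_of(p) == cell_of(user_xy) for p in all_xy) >= k:
            return cell_of(user_xy), cell
        cell *= 2   # coarsen the grid and try again

users = [(0.2, 0.4), (0.8, 0.9), (1.4, 0.2), (3.7, 2.1)]
print(cloak(users[0], users, k=3))   # ((0, 0), 2.0)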

An Empirical Study on IMDb and its Communities Based on the Network of Co-Reviewers , Maryam Fatemi and Laurissa Tokarchuk
Interaction between people and content on social networks is important. There are a number of recommendation systems available, but they suffer from shortcomings. A number of methods are used to compare movie-review communities on IMDb; genres and context must be taken into account.

Providing Secure and Accountable Privacy to Roaming 802.11 Mobile Devices , Panagiotis Georgopoulos, Ben McCarthy, and Christopher Edwards
Mobile devices require connectivity and security. Differences in protocols and access-point configs affect user mobility. An eduroam equivalent can work, using Chargeable User Identity (CUI, RFC 4372): the request is anonymous to the access network, but an alias is relayed to the home network for authentication. A real IPv6 deployment test was done in Lancaster.

When Browsing Leaves Footprints – Automatically Detect Privacy Violations
Hans Hofinger, Alexander Kiening, and Peter Schoo (Fraunhofer Research Institution for Applied and Integrated Security AISEC)
Introduced Prividor, a privacy-violation detector in the form of a browser add-on. There are a large number of web techniques for user tracking, such as cookies and scripts. A database is used to keep track of bad sites, in addition to code checking. A centralised version was chosen for better management.

 

EPSRC COMNET Workshop report, February 9-10, QMUL

Report for EPSRC Workshop on Social Networks and Communications

http://www.commnet.ac.uk/node/42

9-10 February, Queen Mary University of London

Organisers: Hamed Haddadi, Laurissa Tokarchuk, Mirco Musolesi, Tristan Henderson

The COMNET workshop was aimed at bringing together leading researchers
and academics working within Digital Economy and Networking research
in the UK. Over the two days, more than 50 people from academic and
industrial institutions attended the workshop. The workshop was very
interactive, with a small number of engaging talks, a number of
“proposal writing” and “challenge solving” sessions, and a high number
of informal introductions and project bootstrapping. The high number
of emails, messages and interactions on social networks afterwards was
indicative of the success of the workshop, whose programme can be
found at http://www.commnet.ac.uk/node/42 .

The keynote talk was delivered by Professor Yvonne Rogers (UCL), an expert in Human-Computer Interaction (HCI). She highlighted the need to design equipment and websites that are also suitable for elderly, disabled or less educated members of society. She also demonstrated a range of simple products and ideas enabling shoppers to understand the healthiness of their products. The talk was followed by an individual introduction by every participant, where research interests and industrial relevance were discussed.

The afternoon session of the first day featured talks from Dr Abhijit Sengupta (Unilever) and Dr Stuart Battersby (Chatterbox Analytics), who both discussed the new use of digital and social media for advertising and brand marketing. They highlighted the strong need for collaboration between graph theorists, complex-network researchers and NLP experts in order to understand the large volumes of data. This is also in line with the EPSRC Big Data research focus area.

Cecilia Mascolo (University of Cambridge) delivered the Friday morning talk on different aspects of research on social networks and the challenges that remain to be solved. This talk was followed immediately by the second break-out group exercise, which aimed to solve some of the challenges in making social networks more secure for users and more useful for different organisations, be they friendship recommendation websites or crowd-control scenarios.

 

Professor Derek McAuley (Horizon Digital Economy) concluded the workshop with an overview of the discussions, challenges and ideas presented and brought up in the workshop, discussing potential avenues for research into the digital economy, some of which are listed below.
Overall, the participants discussed a number of ethical, security and scalability issues around digital-economy-themed projects such as green networking, human-computer interaction, Online Social Networks and personal data.

The researchers highlighted a number of strategic areas where more
cooperation and collaboration between academia, industry and
governments is required:
i) Scaling up social science and scaling down complex systems
research: Currently, there is a big gap between social networks
researchers, focusing on long-term monitoring and study of a very low
number of subjects, and complex systems researchers, trying to crunch
data about millions of users without focusing on individual
interactions. This gap needs to be narrowed.
ii) Clear ethics: researchers should take more responsibility for the
collection, storage and sharing of publicly available data, especially
since aggregation of such data can ease correlations and inferences.
We can also drive this forward via innovative systems.
iii) Formation of incentives: experiments should aim to provide the
right incentives, and to include a diverse range of participants.
iv) Think globally: the law always lags behind technology, and
technology is usually designed in a single country and deployed
everywhere, so researchers must take into account ethical, cultural
and moral implications.
v) Digital inclusion: there is a need for more work on HCI and
easier-access technology for inclusion of the older generation in
Internet services.
As one researcher put it, “At some points I even felt like suggesting to the others that we should write a grant proposal about our ideas”, and another: “a very interesting and insightful workshop. I enjoyed the discussions immensely.” And a personal blog post:

http://www.syslog.cl.cam.ac.uk/2012/02/11/we-are-all-social-scientists-now-but-werent-we-born-that-way-anyway/

We acknowledge the EPSRC COMNET programme for providing funding for
this exciting workshop, and hope to be able to organize further such
events regularly.