Report on SIGCOMM 2012, Helsinki (by Steve Uhlig, Networks)

1st day: HotSDN workshop attracted a lot of submissions as well as a large attendance (100+).
This reflects how topical SDN is and the growing community working on it.

2nd day:

Keynote and SIGCOMM award: Nick McKeown, Stanford University.
Title: "Mind the gap".
The talk was about the "gap between theory and practice" and how to improve practice, especially by
putting pressure on industry. Nick explained his strategy for finding the right topic:
meet industry, and when they get angry, you might be onto something. Of course that does not mean
the topic makes for good research, but at least it is potentially relevant. Nick said that
papers should be written the same way as documentation, to explain what we are doing:
let the problem and the story lead, not the marketing or the flow of the paper itself.
From a technical perspective, Nick explained that the point of SDN,
for him, is to reduce the complexity of network management (this was questioned by Dina Papagiannaki
during the Q&A). His research strategy is to make everything public, so as to allow others
to reproduce his work; the point is to stand on each other's shoulders rather than compete.
He offered some criticism of the SIGCOMM conference, which in his opinion is too small
(30 papers) and too narrow in audience (500 attendees). He suggested making it more like SIGGRAPH, with
a 50% acceptance rate and more than 2000 attendees, including plenty of industry people.

The test of time award went to the "Tussle in cyberspace" paper from SIGCOMM 2002. In the future
there should be a presentation for this award: such papers have had high impact for different
reasons, and hearing from the authors about the original intent of the paper and where it ended up
having impact would be interesting for the community.

Session 1: Middlebox and Middleware
- Multi-resource Fair Queuing: Best paper award. Frankly, it is not clear why: fair queuing is an old topic.
- Making middleboxes someone else's problem: putting middleboxes in the Cloud. The idea
was to be expected, and it seems to work pretty well for some specific applications.
- Hyperdex: Interesting searchable key-value store. Talk did not do justice to the
applicability of the paper to networking.

Session 2: Wireless Communication (by Cigdem Sengul from T-labs)

- Picasso: Flexible RF and Spectrum Slicing by Hong et al. from Stanford University looks
at full-duplex operation (i.e., receiving and transmitting at the same time) in adjacent bands. They also
presented a demo on Day 2, which I unfortunately missed due to having a demo of my own at the
same time. For more information on Picasso:
- Spinal codes by Perry et al. from MIT introduces a family of rateless codes that get close
to Shannon capacity. For more information on their research:
- Efficient and reliable low-power backscatter networks: treats nodes as virtual senders
and relies on collision patterns as codes.

Poster and demo session
Lots of variety in the topics, and quite interesting. The sessions were attended by a lot of
people; very good for visibility!

Session 3: Data Centers: Latency
- Deadline-aware datacenter TCP: changing TCP so that it meets deadlines when latency is
an issue.
- Finishing flows quickly: Same goal as previous paper but through flow scheduling.
- DeTail: reducing flow completion time tail: again the same story, but this time through a cross-layer approach...

Session 4: Measuring Networks (mostly by Cigdem Sengul from T-labs)
- Inferring visibility: inference techniques to guess which paths do or do not cross
a given AS. The talk was very focused on the inference techniques, and not much on whether they actually work.
- Anatomy of a Large European IXP: The paper emphasizes once more that the Internet is not
what we think it is, with hypergiants and CDNs (content delivery networks) flattening its structure.
What is interesting is that data from a single IXP captures all we know of the Internet
(from several BGP-based studies and measurement data) and adds to it by showing the
vast number of peering connections in the Internet.
- Measuring and Fingerprinting Click-Spam in Ad Networks by Vacha Dave et al. from UT Austin and
MSR India. For me, this was the best and most enjoyable presentation of the entire conference.
The authors presented a measurement methodology for identifying click-spam in
advertisement networks, digging into the data to identify fraudulent activities. Their impressive
results show the pervasiveness of click-spam, especially in the mobile advertising context,
which is interesting not only for Internet researchers but also for Internet users. The
authors also warn that this is an open problem which they do not expect to go away for a long time.

Session 5: Data Centers: Resources Management
- FairCloud: sharing the network in Cloud computing
- The only constant in change: incorporating time-varying network reservations in data centers
- It's not easy being green: the tradeoff between access latency, carbon footprint, and electricity cost.

Session 6: Wireless and Mobile Networking
- Skipped.

Poster and demo session (2)
Again posters and demos.

Best of CCR session
This session was dedicated to the best of CCR talks, where the best papers from CCR were
presented by their authors. The papers were (1) Forty Data Communication Research Questions by
Craig Partridge, (2) Extracting Benefit from Harm: Using Malware Pollution to Analyze the
Impact of Political and Geophysical Events on the Internet by Alberto Dainotti et al., and (3)
The Collateral Damage of Internet Censorship by DNS Injection, an anonymous submission presented
by Philip Levis.

Session 7: Network Formalism and Algorithmics
- Perspectives on Network Calculus: nice tutorial and update on the state-of-the-art.
- Abstractions for network update: using a fine-grained, flow-level abstraction to apply
network updates in such a way that packets are never caught between network states.
- Pre-classifier to reduce TCAM consumption: the main drawback of TCAMs is their power consumption.
This paper relies on a pre-classifier so that parts of the TCAM can be switched off during each lookup.
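The pre-classifier idea from the last talk can be illustrated with a toy model (all names here are mine, not the paper's; a real TCAM matches all rules in parallel in hardware, which is exactly what costs power):

```python
from collections import defaultdict

class TcamBlock:
    """One independently powered region of the TCAM."""
    def __init__(self):
        self.rules = []        # (pattern, mask, action) in priority order
        self.powered = False   # stand-in for the block's power state

    def match(self, key):
        self.powered = True    # only this block is energised for the lookup
        for pattern, mask, action in self.rules:
            if key & mask == pattern & mask:
                return action
        return None

class PreClassifiedTcam:
    """A coarse pre-classifier on the top bits of the key picks one block,
    so all other blocks stay powered down during the search."""
    def __init__(self, prefix_bits=2, key_bits=8):
        self.shift = key_bits - prefix_bits
        self.blocks = defaultdict(TcamBlock)

    def add_rule(self, pattern, mask, action):
        self.blocks[pattern >> self.shift].rules.append((pattern, mask, action))

    def lookup(self, key):
        for block in self.blocks.values():
            block.powered = False          # everything off by default
        return self.blocks[key >> self.shift].match(key)

tcam = PreClassifiedTcam()
tcam.add_rule(0b11000000, 0b11110000, "drop")
tcam.add_rule(0b00010000, 0b11110000, "forward")
print(tcam.lookup(0b11000101))                       # -> drop
print(sum(b.powered for b in tcam.blocks.values()))  # -> 1 (one block searched)
```

The power saving comes from the last line: with the pre-classifier, only one of the blocks is active per lookup instead of the whole TCAM.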

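The versioned, two-phase update behind "Abstractions for network update" can also be sketched with a toy model (names and details are mine, not the paper's): packets are stamped with a configuration version at ingress, and switches keep both rule sets until old packets drain, so no packet ever sees a mix of old and new state.

```python
class Switch:
    """Toy switch whose rules are keyed by (config version, match)."""
    def __init__(self):
        self.rules = {}

    def install(self, version, match, action):
        self.rules[(version, match)] = action

    def remove_version(self, version):
        # Garbage-collect a configuration once its packets have drained.
        self.rules = {k: v for k, v in self.rules.items() if k[0] != version}

    def forward(self, packet):
        return self.rules.get((packet["version"], packet["dst"]))

core = Switch()

# Old configuration: packets stamped with version 1 go to port1.
core.install(1, "10.0.0.0/8", "port1")

# Phase 1: pre-install the new configuration under version 2;
# in-flight version-1 packets are unaffected.
core.install(2, "10.0.0.0/8", "port2")

# Phase 2: the ingress flips its stamp, so every new packet carries
# version 2 and sees only the new rules, never a mix of both.
pkt = {"dst": "10.0.0.0/8", "version": 2}
print(core.forward(pkt))   # -> port2

# Phase 3: once version-1 packets have drained, drop the old rules.
core.remove_version(1)
```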
Session 8: Streaming and Content Networking
- ShadowStream: Adding performance evaluation to the capabilities of a streaming platform.
- Case for Coordinated Internet-scale control plane: Conviva marketing their data and
selling the case for a black-box control plane built on this data. Judging from the
questions, the audience did not buy it.
- Optimizing cost and performance for multihoming: redirecting users to improve QoE. Not
very convincing, as it is again a black-box approach like the previous one.

Session 9: Routing
- Private and verifiable interdomain routing decisions: a system that lets the peers of a
network verify whether it propagated the wrong routes.
- LIFEGUARD: practical repair of persistent route failures: assuming one is able to
locate connectivity failures, the paper proposes to help ISPs by poisoning the
faulty paths. Very incremental compared to previous work, and hard to buy...
- On-Chip Networks from a Networking Perspective: Congestion and Scalability in Many-Core
Interconnects by George Nychis et al. from CMU and Microsoft Asia. The paper is indeed
interesting and shows which networking solutions do and do not apply to on-chip
networks. However, I am still wondering what the exact takeaways might be for the networking
community.

Session 10: Data Centers: Network Resilience
- NetPilot: automating datacenter network failure detection: deactivate and restart offending devices.
- Surviving failures in bandwidth-constrained datacenters: exploiting traffic patterns to
improve behavior under failure.
- Mirror Mirror on the Ceiling: Flexible Wireless Links for Data Centers by Xia Zhou et al. from
UC Santa Barbara. Again an interesting marriage between different topics: the use of 60 GHz
wireless links in data centers. However, while the mirror idea for reducing interference is
very interesting from a wireless networking perspective, it is not clear how data centers will
benefit from such comparably low-capacity wireless links. Furthermore, scheduling multiple
concurrent links seems to be an unsolved issue.

Overall, while SIGCOMM papers are extremely strong in their evaluation and execution, few are
really inspiring or tackle fundamental issues in data communications. Part of the problem
might be that talks are too centered on the marketing of the paper itself, and not enough on
the challenges in the area.