This year Google I/O was reported to have 23% women in attendance (up from 20% in 2014 and 8% in 2013). It was my second year attending Google I/O, and both times were through the Women Techmakers (WT) program, which is behind that incredible boost in numbers.
It’s pretty rare to have such a big gathering of women in tech, and even less common for the gathering to draw from all over the world (Grace Hopper is perhaps similar, but I have yet to attend GHC). It was great to sit down with many of them at one of the pre-I/O dinners the WT program puts on, and because we had been chatting over Slack beforehand, there was already a feeling of community by the time we arrived.
This feeling of community lasted throughout I/O. To quote one of the other women who attended: “It was like already knowing pretty much every other woman there already.” At most conferences, there are always things that ‘in the know’ people get to do that others don’t (parties, special activities, or extra hidden content). Because of this community, though, I heard about most of the bonus activities and could prioritize my time accordingly.
Aside from waxing poetic on the WT program, there were some announcements this year that I was very excited about. Most of them happened at the second ‘keynote’ (a keynote only very loosely, as it was nowhere near the scale of the actual one), held by ATAP (Advanced Technology and Projects), a special unit headed by Regina Dugan. In my humble opinion, the announcements there were far more exciting than those at the actual keynote, although I can understand the business reasons for not billing it as one.
ATAP is basically a smaller-scale Google X, but far more focused on products that show real promise of shipping in two years or less. This year, they demoed a radar-based 3D gesture sensor about the size of a dime, dubbed Project Soli.
Now, there have been other touchless 3D gesture controllers like Leap Motion, but from the demo, this one seemed far more accurate: it could detect the kind of movements you might make to turn the dial on a watch, which is to say really small movements. I really want to use it as a controller in a VR environment whenever it comes to market. If the cost is negligible, combining it with something like Google Cardboard as a super low-cost entry into VR could finally make VR hit critical mass (or I may just read too many sci-fi novels).
My other favorite demo was Project Ara, the modular phone. Regina, who was the MC for the demos, introed it with a completely nonchalant ‘Oh, just one more thing’. I’d been hearing rumors of a smartphone you could add on to and upgrade piece by piece, but it hadn’t been demoed to public eyes until right then.
This is basically the modder’s dream. Existing add-ons like fisheye lenses and the Square credit card reader have tried to take advantage of the few ports available on a smartphone, or of Bluetooth, but Ara makes the possibilities so much wider. Some thoughts I had:
- Faster upgrade cycles: you only have to upgrade the parts you care about. That means a more incremental revenue stream for device makers, since they don’t have to wait out the full 2-3 year cycle most people take to upgrade to a new phone.
- More competition, since individual components can be developed by a much smaller team than the one needed to build a full phone.
- A Polaroid-style printer module (or any other small-scale printer)
- Blood-test modules (something similar already exists as an add-on): swap them out whenever you need to run a different test. The modular nature of the phone could also mean stripping out the non-test-essential components while the phone is acting as a mobile lab.
In some ways, though, Ara would make app developers’ lives much harder, since the variability in what hardware a device might have would be even greater. On the other hand, dealing with that hardware complexity might leave less incentive to ship custom versions of the Android OS if this became the main form factor. That, in turn, would mean OS updates could be delivered more reliably, so there would be less burden of coding for legacy versions of Android to balance out the extra hardware complexity.
This year overall had fewer code and technical demos and a lot more focus on design, which seemed like a disappointment to many of those I talked to. The only truly technical session I attended was on Espresso, an integration-testing framework for Android. While I enjoyed the session, it was incredibly hard to actually hear, since so many people were trying to attend and the space was tiny. I ended up behind the sign that told you what talk was currently in the room, peering through a gap in the ‘wall’ around the talk. Espresso, by the way, sounds like it makes Android testing a lot better, with things like mock intents and more semantic view matchers. (Note: I don’t develop much in Android, but this makes me want to play with it again, since good testing tools make programming more enjoyable.)
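To give a flavor of those semantic view matchers, here is a minimal sketch of an Espresso UI test. It needs an Android device or emulator to actually run, and `LoginActivity`, the `R.id.*` view ids, and the greeting text are all hypothetical names I made up for illustration; the `onView`/`perform`/`check` calls are Espresso’s real API:

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginActivityTest {
    // LoginActivity and the view ids below are hypothetical.
    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void greetsUserAfterLogin() {
        // Semantic matchers: find views by what they are, not by screen coordinates.
        onView(withId(R.id.username)).perform(typeText("ada"), closeSoftKeyboard());
        onView(withId(R.id.login_button)).perform(click());
        // Espresso waits for the UI thread to go idle before checking the result.
        onView(withId(R.id.greeting)).check(matches(withText("Hello, ada!")));
    }
}
```

The nice part, compared to older Android testing, is that you never write explicit sleeps or polling: Espresso synchronizes with the UI thread for you.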
I was also disappointed that while the keynote focused a lot on the next billion mobile users to come online, the design session for it just boiled down to design for slow and intermittent internet. There weren’t guidelines for how to deal with cross-cultural considerations, nor were there improvements to the localization services for Android that had been announced previously.
And That’s All She Wrote
After 2.5 days of the craziness of Google I/O, I’ve come out the other end super excited about the future of VR and modular phones as well as a lot more connected to the women tech community. I am content, and now, I think I’d like to take a nap.