Open Meetings With Vidyo

Recently for the Mozilla Gfx and Accessibility meetings we’ve been using Vidyo.  In both cases we filed MoCo IT bugs to get dedicated “rooms”.  This yielded two benefits:

  • A public URL to give out for accessing the room from a device (i.e. a laptop)
  • A dedicated phone conference extension, accessed through the traditional Mozilla Asterisk system via x92

If a meeting is held in a Mozilla Corporation room with a Polycom system (Warp Core, Bridge), you don’t need to do anything special, because Vidyo integrates into the Polycom systems too. Just make sure you call the “<ROOM> Vidyo” item in the directory.  If you need to have a private meeting, the rooms can have PINs as well.

There are a couple of drawbacks with Vidyo:

  • No Linux support
  • Not accessible

However, because of the Asterisk integration, the experience is not degraded for users desiring those features – they just dial in as before.

Finally, some inter-continental eye candy via JOEDREW! from last week’s Gfx meeting.  Note the browser with the screen-shared agenda in the lower right.

Anyone interested in Gfx and Accessibility meetings is encouraged to join.  Announcements are sent to dev.platform and dev.planning for Gfx and Accessibility respectively.


WebGL & Security

Recently Context Information Security Limited attracted a lot of attention for a blog post on the state of WebGL security.  For Mozilla, WebGL was first released in Firefox 4, and there are implementations in Chrome, Safari and Opera as well.  The blog post outlines an abstract concern that WebGL is inherently insecure because it allows fairly direct access to the hardware, along with two specific attacks: a Denial of Service and a Cross-Domain Image Theft.

The Denial of Service attack does not generally endanger user data or privacy, but it can be highly annoying for users, not unlike sites that pop up multiple dialogs or have long-running JavaScript that hangs the browser.  The Khronos WebGL working group has been aware of this type of issue for some time and has discussed it openly.  Shader validation can help somewhat, as can GL_ARB_robustness, but the forthcoming GL_ARB_robustness_2 extension will help even more.  There are also user confirmation approaches available, depending on what real-world data we uncover over time.
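For a sense of what recovery looks like from the page’s side, here is a minimal sketch of handling the context loss that a driver reset (for example one triggered by GL_ARB_robustness after a runaway shader) produces.  It uses the standard WebGL context events; initResources and draw are hypothetical stand-ins for an application’s real setup and render code:

    // Minimal sketch: reacting to WebGL context loss and restoration.
    // initResources and draw are hypothetical stand-ins for real app code.
    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const gl = canvas.getContext("webgl") as WebGLRenderingContext;
    let frameId = 0;

    function initResources(ctx: WebGLRenderingContext): void {
      // (re)create shaders, buffers and textures here
    }

    function draw(): void {
      // render one frame, then schedule the next
      frameId = requestAnimationFrame(draw);
    }

    canvas.addEventListener("webglcontextlost", (event) => {
      event.preventDefault();        // signal that we intend to handle recovery
      cancelAnimationFrame(frameId); // stop drawing with a dead context
    });

    canvas.addEventListener("webglcontextrestored", () => {
      initResources(gl);             // rebuild all GPU-side state
      draw();
    });

    initResources(gl);
    draw();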

The Cross-Domain Image Theft issue is indeed a viable attack.  It had been previously theorized, but no known proof of concept giving meaningful results existed until now.  While it is not immediately obvious that it can be exploited in a practical attack right now, experience in security shows that this is a matter of when, not if. Mozilla is engaged with browser, OS and hardware vendors in the WebGL mailing list to solve this as quickly as possible.  One solution is simply to disallow the use of cross-domain images that do not have CORS approval in the WebGL context, which is currently Mozilla’s preferred solution.  Mozilla is committed to rolling out a solution that secures against real threats while keeping WebGL a viable platform.
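To make the CORS-based approach concrete, here is roughly what it looks like from the page’s side, as a sketch under the assumption that the image server opts in via an Access-Control-Allow-Origin header (the URL below is a placeholder):

    // Sketch: requesting CORS approval for a cross-domain image before
    // uploading it as a WebGL texture.  The URL is a placeholder.
    const glCtx = (document.querySelector("canvas") as HTMLCanvasElement)
      .getContext("webgl") as WebGLRenderingContext;

    const image = new Image();
    image.crossOrigin = "anonymous";  // fetch the image under CORS rules
    image.src = "https://other-origin.example/photo.png";

    image.onload = () => {
      const texture = glCtx.createTexture();
      glCtx.bindTexture(glCtx.TEXTURE_2D, texture);
      glCtx.texImage2D(glCtx.TEXTURE_2D, 0, glCtx.RGBA, glCtx.RGBA,
                       glCtx.UNSIGNED_BYTE, image);
      glCtx.texParameteri(glCtx.TEXTURE_2D, glCtx.TEXTURE_MIN_FILTER, glCtx.LINEAR);
      // Under the proposed rule, an image without CORS approval would be
      // rejected for use in the WebGL context rather than uploaded.
    };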

The abstract concern around hardware access is something we (and other browser vendors) have thought about a lot during design and implementation.  For Firefox we first addressed this by employing both a whitelist and a blacklist for drivers.  The blacklist can be deployed daily without a full software update so we can respond rapidly to any issues.  We’ve also been working with driver vendors, both before and after the release, through the Khronos WebGL working group to report and solve specific issues.  Vendors must be active in order to be whitelisted. Longer term, again through Khronos, we continuously work to raise overall vendor awareness and develop more advanced countermeasures such as the OpenGL GL_ARB_robustness_2 extension to mitigate various types of attacks.

WebGL is a powerful technology and we certainly recognize that significant attacks against it may be possible. Nevertheless, claims of kernel-level hardware access via WebGL are speculative at best, since WebGL shaders run on the GPU and shader compilers run in user mode. We’re keen to work with Context, or any other group, on identifying and closing specific, concrete attacks as they emerge.


Mozilla: Two Months

Hard to believe two months have gone by at Mozilla since I joined.  Mozilla is undergoing a lot of change right now because we’re pushing out Firefox 4, hiring like crazy and transitioning to a rapid release cycle, so keeping up with the change and learning everything has kept me busy.

The general openness has been great; I only go to one regular multi-person meeting that isn’t public.  Some of the interesting teams/people/projects I have been working with are:

Gfx

The graphics module in Mozilla is responsible for getting the bits on the screen.  With Firefox 4 this uses the layers system.  There are 5 main layer types:

  • Container – only holds other layers
  • Image – image data
  • Color – single color
  • Canvas – HTML canvas
  • Thebes – a Thebes surface (Thebes is the Mozilla drawing API; it maps onto cairo for its implementation)

There are also a couple of specialized layers: Shadow (a proxy to the “real” layer somewhere else) and Readback (to handle windowless plugins properly).

Depending on the operating system and the system capabilities, a layer manager is used to manage the layers.  There are 4 layer managers:

  • D3D10 – Windows 7/Vista
  • D3D9 – XP
  • OpenGL – OS X
  • Basic – Linux and fallback

This is not quite accurate because there is a blacklist for various drivers; if a card/driver combination is blacklisted, the Basic layer manager will be used.  The layer manager then implements each of the 5 main layer types on top of the 3D rendering technology (except for Basic, which is cairo/Thebes only) using hardware acceleration where possible.  Once the layers are all rendered, the layer manager composites the results using properties on the layer such as clipping and opacity.
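As a rough illustration of the selection logic described above (a simplified sketch only, not actual Gecko code, and the names are mine):

    // Illustrative sketch only, not Gecko code: choosing a layer manager
    // based on platform and the driver blocklist.
    type LayerManagerKind = "D3D10" | "D3D9" | "OpenGL" | "Basic";

    function chooseLayerManager(platform: string, driverBlocked: boolean): LayerManagerKind {
      if (driverBlocked) {
        return "Basic";            // blocklisted card/driver: software fallback
      }
      switch (platform) {
        case "Windows 7":
        case "Windows Vista":
          return "D3D10";
        case "Windows XP":
          return "D3D9";
        case "OS X":
          return "OpenGL";
        default:
          return "Basic";          // Linux and anything else
      }
    }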

Hardware acceleration is becoming increasingly important, both to support technologies like WebGL and for general performance.  Firefox 4 was a big step forward on hardware acceleration, but project Azure will drive this even further forward.

The Gfx team goals are laid out for Q2 (basically for FF 6 and 7):

  • Fennec layers acceleration – hardware accelerated compositing for Fennec using the OpenGL ES layer manager, which is part of the OpenGL layer manager but currently too buggy to turn on by default
  • Electrolysis accelerated layers – right now display and rendering happen in the same process; with Electrolysis this is not the case, so to avoid costly readbacks from the GPU and sending textures over IPC only to upload them back to the GPU in the display process, textures need to be shared between processes where possible
  • NPAPI async drawing extension – accelerated windowless plugin support on Windows
  • Azure – D2D accelerated 2D canvas implementation, first step in rolling out Azure
  • Mac Plugin Async Drawing – just what it says, end synchronous plugin drawing on OS X

A11y

A11y is of course responsible for accessibility (on all platforms).  This team was quite small until recently, but is expanding, in particular to drive web accessibility on mobile devices which is in a poor state no matter what browser and OS you are on (well, except for iOS which has recently become good).

The accessibility team is about more than just hacking too.   There is engagement with other open source teams such as NVDA and GNOME, as well as initiatives such as Women of GNOME and the GNOME A11y hackfest.   There is also a grant program for web related accessibility projects.

The accessibility goals are laid out for Q2:

  • Begin electrolysis a11y impl – solving issues around introspecting content when assistive technologies are operating in the UI process
  • Work with product management to complete mobile functional accessibility requirements and priorities – as above, need to drive a mobile a11y story
  • Make all implemented HTML5 inputs accessible – new HTML spec implementation bits should always be accessible
  • Finish work for accessible text interfaces to include only cached text usage – performance improvement
  • Remove 75% of existing XPCOMery from the accessibility module – part of a general deCOMtamination effort

Orange Crush

I’m also working with Ehsan.  He is driving a great effort called “Orange Crush” to reduce intermittent oranges.  An orange is a test failure, and there are a lot of these that happen only intermittently.  Test failures are not good of course, but the biggest issue is that they waste time: people have to check that the oranges were intermittent after their changes land and “star” them (indicate they are acceptable) – and it takes a few hours before the tests are complete, so you have to stick around when you land something.  As a group, intermittent oranges also happen frequently enough that you get a few every landing, which prevents doing nice things like automated landing from the test build server (the “try server”) to the main build server on a successful test run.

Rapid Release

Mozilla is transitioning from a ship-it-when-it’s-done model to a continual release train model that will ship every 12 weeks with an option to ship every 6 weeks.  This involves a lot of re-working of processes, tools, engineering habits, marketing and everything else that goes along with a release.  Biggest of all is of course the mental shift to acknowledge that things really can wait 6 more weeks (instead of 12+ months like in FF4!).  Overall this process should help deliver features to users and web developers sooner and drive the open web forward more quickly.

Lots else to do – recruiting, 1:1s, goal setting, meetings – but it’s a great environment to do it in.


Last Day at Novell

Today is my last day at Novell, which I arrived at by way of Helixcode and then Ximian in 2003.  Both these companies enabled me to work in and around open source, first as a hacker and then later as a manager and director.  Perhaps future posts will have some more reflections on this time.  It has been great and I learned so much over the years, but it is now time to move on.


Sleeping on a Couch: Transferring Culture

In the first half of 2009 the Preload department at Novell was building a team in Taiwan.  There were two main reasons for this: our customers (the OEMs and ODMs) were located there and we wanted to be near them, and the first question a customer in Taiwan always seemed to ask was “how many people do you have here?”.  Local support in the native language, backed by a large team, is very important to companies in Taiwan.

We hired excellent people, both experienced Linux engineers and people straight out of university.  However, all were pretty inexperienced working with open source communities, and I had a perception that any previous workplaces they had been at in Asia were more likely to be hierarchical in nature, where open communication was discouraged because you don’t question the boss.  I believe this type of workplace prevents issues from surfacing until it’s way too late to solve them and leads to sub-optimal problem solving.  I wanted to ensure that the new team understood open source was a key component of our work and that open communication was important both internally and externally in support of this.

This type of situation is not one I would have thought about at all when I first became a manager, but a couple of prior experiences (including failure) suggested this was something I could and should address.  In particular, the OpenOffice “indoctrination” about 4 years ago when we were expanding the team.  At that time I managed the OpenOffice team at Novell, and Michael Meeks interviewed everyone we hired during the expansion; many of them spent 1-2 nights sleeping on his couch in the UK or got trained in the Toronto office in person.  Due to this, that team (now the team working on LibreOffice) to this day cares a lot about tenacious fixing of customer problems, reducing code duplication/bloat and particularly about building a great community around the project.

So for Taiwan, Greg KH, Michael Meeks, Aaron Bockover and Stefan Dirsch all visited the office within a year to cover engaging with the community, supportability (no one-time throwaway code here!), upstreaming, the commitment to debug and find the real root cause, and more.  This had two major benefits.  First, the culture of open source and open investigation into problems was transmitted by people who lived it.  Second, communication pathways were built so that the engineering team in Taiwan felt comfortable asking questions and had people they had met to ask the questions to, without needing the “big boss” (me) to facilitate or hear potentially “dumb” questions.  So what do we have now?  Those engineers, previously inexperienced with open source, are now proposing, submitting and maintaining code upstream.

(BTW Greg is really great at the kernel piece of this and was able to help Ralink in a similar manner with these two items as well – in fact Novell is happy to help any component vendor this way).

Don’t get me wrong, you can’t just hire anyone and expect to imbue them with your organization’s culture; you have to get people who are interested in and receptive to the culture.  For instance, it’s unlikely every single Facebook engineer was previously part of a culture of shared code base ownership and review that required them to be in the room to fix bugs on the fly, or allowed them to change and submit code to any part of the app, or required checkin review.  These are cultural pieces that are transmitted post hire, but you still have to pick people who are in general receptive to that culture.

I will now always think about training engineering people in a cultural context, not just a technical manner, particularly when building new teams or offices, since it will be tougher for them to get it by osmosis.  Future communication connections are built, and you display the culture you have and want to have in the organization.


Litterbox: A UX Parable

Laundry room

Several years ago in our previous house my wife and I noticed a marked decrease in the amount of laundry being done.  We weren’t any busier, nor had anything changed radically in our lives to explain the drop off.

Our laundry room was laid out as seen in the diagram.  It was in a walled-in porch of a 125-year-old home and was quite narrow, no more than 6 feet total, and with storage and other items lining the walls, much less walkable space.  There were two doorways, one to the kitchen and one to the outside.  If you were doing laundry you entered through the kitchen doorway and walked along the green dotted path to the washer and dryer.  The solid black bar was the baseboard heater and the red box was the litter box for the cat, a covered model with the door facing the path to the washer and dryer.

When my wife and I discussed the laundry drop off she immediately pointed to the fact that it felt a little gross walking to the washer and dryer because of the little grains of litter on the floor the cat tracked out of the litter box right into the path.  It would stick to your socks and you had to brush it off when done.  Ugh.  We are not obsessively clean people and we were vacuuming that room every couple of weeks but the problem would return within a day.  We weren’t starting loads of laundry because of this.

Now, the litter box faced what I would call the conventional way, because the handle to lift the top off was oriented for a person facing the door of the box (for cleaning).  This seemed the obvious way to do it at the time because everything else also faced into the path like this, and there was a natural path for the cat to enter the door following the human path to the washer and dryer.  I thought about moving the litter box but I could think of no other good room to put it in, and the basement door we had to keep closed.  Then a minor bit of inspiration (inspiration is likely too strong a word) hit me – why not turn the litter box 90 degrees?

Useful but not so big change

It would be a bit more awkward for the cat to get in, a bit more awkward for the human to clean, and non-conformist regarding the orientation of everything else in the room, but ultimately clean underwear was much more important and laundry was a more frequent operation than cat litter cleaning.  And it worked: tiny pieces of litter still came out, but not into the path.

I think there are a few user experience takeaways from this:

  • Simple changes can be powerful – this change took 5 seconds once it was decided upon
  • Comfort of the user is important – the little bits of litter weren’t even that bad, but they sure created an “icky” feeling
  • Optimize for the most important behaviour and the users doing it – doing laundry was far more important than the convenience of the cat, and the cat could still carry out its tasks

And also, talk to your users for information!


Git – Micro Commits and Workflow

Recently I moved my blog and some old posts about git got re-posted to Planet GNOME.  One in particular was a post where I was not sold on the micro commit model of git at the time (more than 3 years ago!).

I postulated that more detailed, longer commit messages are good for helping others understand, in the future, the changes you made.  However with git you can easily write a more detailed message at a later date (by rebasing, amending the commit, adding more detail in the merge message, etc.) before the code is pushed or submitted as a patch – and only if it’s really needed.  This means you don’t have to interrupt your local workflow, which allows you to code fearlessly.  SVN and CVS made commits a heavyweight process that required more care, which led me to be biased in this area at the time.

I do believe it is important to have a good model for your git workflow though.  I’ve been trying a workflow published by Vincent Driessen and, while it’s a bit of overkill for an individual project, the steps are easy to remember and well defined.  The only thing I’m not sure of is whether merging without fast-forward will cause problems over time, but in the short term grouping commits and keeping that knowledge is useful.
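For what it’s worth, the commands involved are cheap to run locally before anything is shared (assuming nobody has pulled the commits yet); a few representative ones, with feature-x as a placeholder branch name:

    # Tidy up local micro commits before pushing or sending a patch.
    git commit --amend            # reword or extend the most recent commit
    git rebase -i HEAD~5          # squash, reorder or reword recent commits
    git merge --no-ff feature-x   # keep a feature branch's commits grouped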
