New Report from Iran on Iranian Youth Proxy Use

This story from @maasalan on Global Voices directly ties into my research at the Berkman Center on the potential prosecution of end-users for using encryption and circumvention technology on their mobile phones. While “severe punishment” is possible, it seems to rarely happen in Iran:

In a report conducted by Iran’s Ministry of Youth and Sports, the Iranian government announced that of 23.5 million youth using the Internet, 69.3 percent of them are using circumvention technology such as proxies and VPNs — virtual private networks that provide access to the “global Internet”.

At the moment, Iranians often encounter a firewall when trying to access websites that appear antagonistic towards the government or the nation’s Islamic ideals. The report did not make mention of the legality of circumvention tools. But according to Iran’s list of Computer Crimes, the distribution of both circumvention technology and instructions to use such tools are both illegal. Violating these laws can result in severe punishment.

http://globalvoicesonline.org/2014/09/16/nearly-70-percent-of-young-iranians-use-illegal-internet-circumvention-tools/


(Informal/open) Mobile Security Clinic today @ Berkman 4:30-5:30pm

I am informally launching my weekly hands-on mobile security clinic today at Berkman, around 4:30pm, in the Fellows conference room at 23 Everett.

While some might say a mobile phone is only secure once it's been microwaved, smashed by a hammer, and buried in concrete, the truth is, most of us can't escape the shiny, buzzing tracking device in our pocket.

What I can offer are some free, practical solutions that can go a long way in reducing the likelihood that what you do on your mobile will get hoovered up into a never-expiring log somewhere, or plastered across 4chan. Whether you want to encrypt your calls, messages or photos, ensure sensitive personal or project information is not leaking to any app that asks for it, or deal with more advanced concerns related to surveillance or proprietary app ecosystems, I am happy to go there, and find a solution, if it exists.

If you want a small idea of some of the solutions I can offer, visit this link: https://guardianproject.info/howto/

In return, I get to hear your stories and challenges, as well as aspirations for what a brighter, more secure mobile computing future might be. Like I said, this is a weekly effort, and these types of interactions are a key part of my work as a Fellow here this year.

Assessing the Impact of Five Years of Mobile Security Problem Solving (and Planning for Five More…)

Below is the text of my successful application to the Berkman Center 2015 Fellows program, including the concept for my fairly ambitious project, on which I look forward to finding some allies and collaborators during the year.

***

In a recent leak from the Snowden files, one of the mobile security apps I have developed, Orbot (Tor for Android), showed up in an NSA PowerPoint slide explaining the different forms that the Tor anonymity and circumvention software takes. Next to the app's name was a comment that stated it was “easy to use!”. It was a strangely gratifying moment to know that I had done a good enough job building a mobile version of Tor that it both showed up on the radar of an NSA analyst and merited a positive comment about its usability. It also triggered a good deal of reflection on the impact my efforts were having in the world, and just who was paying attention out there.

It was in the Fall of 2009 that I began work on the Guardian Project, an effort to research and develop open, free security software for mobile devices, with a particular focus on solving problems for people living and working in high-risk, high-surveillance situations. I had recently seen a group of my friends, working as undercover journalists in a hostile country, get tracked down, arrested and temporarily imprisoned due to their use of mobile phones to organize and communicate. I was determined to come up with software that would prevent such a situation from occurring again in the future. I knew the undertaking was significant, and so I set my horizon five years out, and came up with a feature roadmap that I hoped to fulfill.

That milestone is now looming, and coincidentally it also times well with the beginning of this fellowship opportunity at Berkman. At this point in the project, my team and I have developed and released a number of open-source apps for Android, and recently iOS, that enable encryption and circumvention features for voice calls, mobile messaging and mobile web access. We've also come up with some clever ideas like a camera app that automatically blurs faces detected in a photo. There have been millions of downloads, resulting in hundreds of thousands of active users around the globe. We have received grant funding from a diverse set of sources, recruited a brilliant team of talented engineers and designers, and generally done well delivering on our promises. The original feature roadmap I set out to build has largely been fulfilled.

I seek, then, some time, a context and a community in which to reflect on the work I have done, to assess its merit, worth and impact, and to begin planning for the next five years. Beyond a collection of really amazing, moving emails and anecdotes from real users in difficult places, I still have trouble answering “Who are we helping, and how much?”. I want to ensure we are doing more good than harm, and that we are actually reaching the types of users we hoped to in the beginning. I seek to better understand the different global, legal, and cultural contexts in which tools for privacy, security and expression are utilized for social change. This can be boiled down to questions I often receive when I am giving a mobile security training in some far-flung location in the world: “Is this legal for me to use?” and “Can I be arrested for having this on my phone?”. While there is no simple answer, there is a huge disconnect between the Internet idealist's perspective of “If it is not legal, it should be, so you should use it anyway”, and the on-the-ground reality of being detained and incriminated because of some digital bits in your pocket.

While the tool builder's goal is to develop and provide a tangible tool with which someone can fight back against oppression and corruption, they are often unwittingly turning those they want to help into practitioners of a type of civil disobedience, without explaining what the risks of that are. Is the net benefit of increased mobile privacy, the ability to avoid traffic surveillance, and generally keeping your plans and dreams confidential to yourself and those you trust, worth the increased scrutiny or exposure to incrimination by association one might face? Is it actually safer and more powerful for an activist or organization to operate transparently, in the open, and not expect to have any communications privacy outside of close physical proximity?

These types of questions need to be researched and explored both within an authoritarian state context and within our own democratic (self-inflicted?) surveillance states, as increasing lobbying pressure from law enforcement on legislation might turn my team and me into outlaws quite soon. In other words, the axiom “No one has ever been arrested for using Tor” may need to be refreshed soon. The concept of “lawful intercept” is a globally fungible term, better expressed as state-required eavesdropping for corporations seeking to do business in a certain region. Whether the interception is just or not is the important question when seeking to develop and deploy tools that improve and empower a community of users.

During my fellowship, I hope to reach out to legal and research resources within the Berkman community to assist in building a global map overlaying lawful intercept laws and capabilities with the robustness of the larger rule of law. Additional layers of data could include records of persecution based on possession or use of cryptography or other advanced communication tools, whether real name registration is required for mobile network use, data on user groups in the area that are known to be using mobile security tools, and information about surveillance infrastructure known to be in use at telcos and internet service providers in the region. If possible, details on collaboration or collusion by corporate communications hardware and software companies could also be useful to display. I see this resource both as an effort to bring a spotlight on these issues, and as an active resource for any advisor, trainer, activist or journalist traveling to an area who wants to understand the challenges they might face in using a particular type of software, or promoting its use to local communities.
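To make the layered-map idea concrete, here is a minimal sketch of what one region's record in such a dataset might look like, and how a few layers could be combined into a rough risk indicator. Every field name, value, and weight here is an invented illustration, not real data or a real methodology:

```python
# Hypothetical entry for one region in the proposed mobile security
# risk dataset; all fields and values are made up for illustration.
region_record = {
    "country": "Exampleland",
    "lawful_intercept_required": True,
    "rule_of_law_index": 0.42,            # 0 = weakest, 1 = strongest
    "real_name_sim_registration": True,
    "crypto_possession_prosecutions": 3,  # documented cases on record
    "known_telco_surveillance": ["DPI at national gateway"],
}

def risk_score(record):
    """Combine a few data layers into a rough 0-1 risk score (toy weighting)."""
    score = 0.0
    if record["lawful_intercept_required"]:
        score += 0.3
    if record["real_name_sim_registration"]:
        score += 0.2
    if record["crypto_possession_prosecutions"] > 0:
        score += 0.3
    # weaker rule of law contributes more risk
    score += 0.2 * (1 - record["rule_of_law_index"])
    return round(score, 2)
```

The real value would of course come from the curated data layers themselves, not any single scoring formula; the point is that each layer is a simple, machine-readable field that other tools could query.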

For example, as a journalist working in a region, I might want to know whether I should encourage my sources to use mobile security software that would protect my communications with them, but might also increase their chances of coming under greater scrutiny by network operators. If I am a labor organizer supporting exploited workers, I also need to make sure I don't radically increase the chance they will lose their job or be otherwise punished because they got caught using an app. I will research and document these types of user stories, and test them against the resources, to understand the value of this research.

I want the software I develop to work, and to be helpful, useful and empowering. I do not want to just solve for threat X, and not think properly about threats Y and Z. I also know that my work is just one small part of a sea of solutions both free and commercial, attempting to enhance privacy and security for mobile users. The work I am proposing for this fellowship aims to help that larger community of tool builders to think about the use, deployment and realization of their efforts in a more complete way, so that the result can be what we all hope for. It also aims to ensure our users can make the best decisions about the threat they face, and whether or not using a piece of mobile communications software is ultimately beneficial for their situation.

Finally, I envision the output of this work not as a static report, but as a dynamic, shared dataset that any website or application could clone or tap into. I would ideally also develop a default mobile website or app that would give users a “sixth sense”, warning them of potential risks by cross-referencing their device's network operator, geographic location, and installed applications with the data available in the networked mobile security risk database.
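The “sixth sense” check described above could be sketched as a simple cross-reference of the device context against the shared dataset. All names, package identifiers, and database contents below are invented placeholders for what the real networked database would provide:

```python
# Toy local snapshot of the (hypothetical) networked risk database.
RISK_DB = {
    "Exampleland": {
        "flagged_operators": {"ExampleCell"},
        "flagged_apps": {"org.example.circumvention"},
    }
}

def sixth_sense(country, operator, installed_apps):
    """Cross-reference a device's context with the risk database and
    return a list of human-readable warnings (empty list = no data/risk)."""
    warnings = []
    entry = RISK_DB.get(country)
    if entry is None:
        return warnings  # no data for this region
    if operator in entry["flagged_operators"]:
        warnings.append(
            f"Operator {operator} is known to perform traffic surveillance"
        )
    for app in installed_apps:
        if app in entry["flagged_apps"]:
            warnings.append(f"App {app} may draw scrutiny in {country}")
    return warnings
```

A real implementation would pull the region entry from the shared dataset over the network and cache it, but the lookup logic would have roughly this shape.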

I cannot think of a better place to pursue this work than at the Berkman Center, within a community of fellows to help tune, improve and realize this complex effort. I expect there to be a good amount of overlap with other communication infrastructure mapping efforts. I also realize that there exists a great deal of expertise, well beyond my own, in the legal aspects of the issue. This work would greatly benefit from access to these efforts and skills, and I from a supportive network of like-minded colleagues, and thus I humbly ask for your consideration of my application.

Engaging in Process over Product with Software for Social Change

In the last week, I experienced two completely opposite reactions, from two different partner organizations, to what was nearly the same discussion about how to proceed with the research and design of a mobile solution for a real-world human rights and internet freedom context. I wanted to reflect on these here, as I prepare to head west to the Non-Profit Technology Conference in San Francisco to accept the 2012 Antonio Pizzigati Prize for Software in the Public Interest. I wish I had the chance to know Tony Pizzigati, but in lieu of that, I'll do my best to represent the spirit in which his family honors him through this award. I also think that we would have gotten along well, both as precocious kids hacking on neat problems at an early age, and as young adults eager to make an impact on the world.

What is most important to state is that the vast majority of what I have been able to accomplish in my efforts to apply technology solutions to social change needs was due to relationships built, trust earned, support requested and problems presented by real people in need of help to solve real problems. While some might see my work with the Guardian Project as the realm of myopic, open-source hackers locked away in a room trying to realize a crypto-anarchic nirvana, the truth is far from it. In reality, we have spent as much time over the last two years talking with people, working through problems, proposing and testing solutions and dealing with the true drudgery of real progress, as we have in front of our keyboards and screens.

standing on the roof of the world, with a mobile gadget in hand

For now, back to our previously mentioned partners. The first partner, after working with me on a variety of projects and proposals for the last year, clearly enunciated back to me the exact view I hold on how the relationship between a non-profit and a software tool developer should work. They said, and I paraphrase, “You see, what we are doing here, together, is a process: trying to understand what it is that should be done, and for whom, before we do it, and we need to communicate that to our larger community.” In this context, driven mostly by our partner, we were trying to understand the type of mobile technology available to their target community, and the existing interest people had in using mobile technology. It should be noted that this community is spread around the planet, separated by various cultural differences and dialects.

The other partner, an admittedly much newer acquaintance, was extremely confused when my team and I proposed a “phase 0” that would help us begin the collaborative process that we saw the entire partnership engagement becoming. Instead, we were told that this would not work, and that we were the experts, and should provide a full specification of what we proposed to build, and they would review and hopefully approve that, and then we would build it. As long as the spec was approved, and we built to it, the partnership would be determined to be a success.

I share these two extremes because they demonstrate a few points about the difference between developing software in a non-profit, social change context, versus a corporate, commercial or traditional software consultancy.

To some, especially those used to working in a typical client-consultant environment, the example of the first partner sounds like a disaster waiting to happen. Unclear goals, extended periods of open-ended discussions, too many stakeholders, and what is essentially a long “spec” phase. Even for those used to working in a non-profit environment, where budgets are traditionally tight, there is rarely the luxury to spend too much time engaging in this way. Still others might see the first partner as an easy gig, someone to milk money out of, while not really delivering anything substantial.

The second situation might alternatively seem like an ideal one, where you have full rein to implement a nearly turn-key solution, based on existing components, and drive the process to maximum advantage and benefit. The less you are told by them, the better, because this is an opportunity to fund your vision for what you think they need. If the partner, truly just a client in this manner, has any issues, they should have raised them at the beginning, and it will cost dearly for any change to the plan down the road.

It may surprise you then, or it might already be obvious, that from my perspective, the first partner is the ideal one to work with in a social change context, while the latter is a much greater challenge. This is because our goal is to actually make change happen, and not just complete the client engagement successfully. Our duty is to determine if technology can play a role in addressing a need, and if not, to walk away. The best way to work is to constantly revise what you are building based on the latest information and feedback from the ground, and to constantly iterate and tweak what you are supposed to be building. The second partner mostly just wants you to deliver on a contract, and when that is done, perhaps there will be an additional support contract for bug fixes, or another RFP to respond to.

Even more importantly, the partner, the people in need, should feel like they have a share in ownership of the process, and that the resulting product is as much theirs as yours. Rarely will the first version of what you develop be the big breakout hit or complete solution you expect it to be. All must be prepared to continue on a roadmap that includes multiple releases over a decent amount of time, one that takes into account the time it will take for users to adopt and share the tool. This could be a few months, or a couple of years. To support this, the partner should engage in the effort with a willingness to commit to ongoing support, not just of the financial commitment, but of the spirit of the project. If the effort is a core part of their plan, their campaigns and their process, the likelihood of adoption and overall success will be much higher.

Underlying all of this is that, when you are doing this work as freely licensed, open-source software, the solutions you implement need to be more than just opaque, black-box products. To be truly open, and not just a dump of source code, they must be designed in a modular way that promotes re-use, be well documented, properly licensed, and shared on an easy-to-access public site. They should, whenever possible, make use of existing code from other projects, such that you work more efficiently, and support the efforts of other tool developers and the non-profits who support them. There should be some attempt to engage a community of developers and users around the code base, such that sustaining the work extends further than just the amount of money you can pay someone to bug fix it. Again, this is all perhaps counter-intuitive to a traditional consultant model, where investing time and energy into code you are going to give away for free, while also re-using other people's code to reduce the amount you have to charge, does not always compute financially.

I will wrap this up by making a request to all of those eager hackers, developers, designers, consultants, companies and corporations out there, who have in the last few years begun to realize that doing work that does some good can be a good thing for their business and reputation. Even if you come to this world of social change with the best of intentions, the process you engage in may not be compatible with, or mutually beneficial to, those you are trying to help. Please take a step back, and think about your goals, your commitments, and the ability for the work to be sustained beyond this one hackathon, camp, event, or pro-bono engagement, before you promise to change the world.

Many thanks to the Pizzigati family for their support of my own personal attempt to change the world, slowly, one mobile phone at a time.


Mobile Security Audit Icons v1

I’ve been thinking about some ways to improve a user’s understanding or perception of what an app or service does or does not provide in the way of security, privacy or protection. This work is inspired by other efforts, including Mozilla’s Privacy Icons and the television and video game labeling standards. I think it is time that developers come up with a way to accurately communicate the benefits and risks their apps bring, especially when it comes to personal or sensitive information, or users in high-risk situations.

I began by breaking down the areas of possible concerns into three groups: User Identity (including location), Network Connectivity, and Data Storage & Access. These represent, collectively, who and where you are, how and when you are connecting and what you are accessing or sharing. I came up with a brief description of the positive and negative impact an app or service could have in each area. I then designed a basic icon for each, came up with a color scheme and a matching positive or negative charge indicator.

The goal of the icon design below is to indicate whether an app or service deals with these three areas of possible concern in a positive (go green!) or negative (warning yellow!) way. Very rarely will an app address all three, though sometimes, used in combination, a solution can be made to do so. In some cases, an app might provide a benefit in one area, while proving detrimental in another. We might also include one or two more icons to indicate how the security of the app was verified: a + meaning open-source and fully commercially audited, and a – meaning it only has a “trust us” model for security.
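The three-area scheme with positive/negative charges can be modeled in a few lines of code. This is a hypothetical sketch of how an app's ratings might be rendered as a compact text label; the short codes, app ratings, and function names are all invented for illustration:

```python
# The three areas of possible concern, as described above.
AREAS = ("identity", "connectivity", "storage")

def label(ratings, verified=None):
    """Render an app's ratings as a compact text label, e.g. 'NET+ DATA-'.

    ratings  : dict mapping an area name to True (positive) or False
               (negative); areas the app does not address are omitted.
    verified : optional True/False for the extra verification mark
               (+ open-source and audited, - "trust us" only).
    """
    symbols = {"identity": "ID", "connectivity": "NET", "storage": "DATA"}
    parts = []
    for area in AREAS:
        charge = ratings.get(area)
        if charge is not None:  # an app rarely addresses all three areas
            parts.append(symbols[area] + ("+" if charge else "-"))
    if verified is not None:
        parts.append("AUDIT" + ("+" if verified else "-"))
    return " ".join(parts)
```

For example, a well-audited app that protects network traffic but stores data insecurely might render as `NET+ DATA- AUDIT+`; the same mapping could just as easily select green/yellow icon variants instead of text.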

I hope to begin using these to label the apps and libraries provided by the Guardian Project to help better educate our users. If there are similar existing ways to label apps out there, we would be happy to consider them. Otherwise, please provide feedback below, or steal our cc-licensed SVG file, and make your own variations.

Mobile App Audit Icons SVG