Computer programming: Why we should all learn to hack.


Owning a computer once went hand in hand with understanding exactly how it worked. That may have changed, but Tom Chatfield says it’s time to reclaim the past.


There is an old joke amongst computer programmers: “There are only 10 types of people in the world: those who understand binary, and those who don’t.”

Not funny to everyone, but it makes a neat point. We now live in a world divided between those who understand the inner workings of our computer-centric society and those who don’t. This is not something that happened overnight, but it is something that has profound consequences for our future.

Rewind to computing’s earliest decades and being a “hacker” was a term of praise rather than disgrace. It meant you were someone who could literally hack code down to size and get it to do new things – or stop it from doing old things wrong. You were someone who could see through the system and, perhaps, engineer something better, bolder and smarter.

In the early 1970s, Steve Jobs and his future co-founder at Apple, Steve Wozniak, worked out how to “hack” the American phone system using high-pitched tones, so that they could make prank calls to people such as the Pope (who was asleep at the time). It was a mild kind of mischief by modern standards – and a sign of a time in which the once-impenetrable realms of mainframe computers and institutional communications systems were beginning to be opened up by brilliant amateurs.

As you might expect, the phone system has become considerably harder to hack since the 1970s, and the divide between those who use computers and those who program them has also widened as the software and machines have become more complex. Having started out as outposts of do-it-yourself home computing, companies like Apple have become pioneers of seamless user experience, creating apps and interfaces that don’t even demand anything as technical as the use of a keyboard or mouse, let alone insights into the inner workings of the technology involved.

Year of code

This relentless drive towards technology that blends seamlessly into our lives leaves us in an increasingly bifurcated world. Information technology is a trillion-dollar global industry, with legions of skilled workers creating its products. Outside of their ranks, however, the average user’s ability to understand and adapt the tools they are using has steadily declined. It is a situation that is unlikely to change overnight – but there are movements aimed at bridging this gap.

In the coming weeks, a UK foundation will launch the Raspberry Pi – a £16 “computer” aimed largely at schoolchildren. Unlike your tablet or laptop, however, this computer is not a glossy, finished piece of kit, and deliberately so. The credit card-sized, bare-bones circuit board is more akin to the DIY machines that the likes of Jobs and Wozniak built and played with in the earliest days of computing. It demands to be tinkered with or “hacked” – and that is the whole point. It encourages people to better understand the hardware at their fingertips.

Across the Atlantic, meanwhile, a young organisation called Codecademy has made it its mission to increase people’s understanding of the code that runs on their machines. With over half a million users registering during its first month of operation in 2011 alone, Codecademy is a rapidly expanding service aimed at imparting the basics of coding to anyone wishing to learn, free of charge. Its initial focus is the web language JavaScript, and it is inviting users to make 2012 their “code year” by sending out emailed prompts to complete one interactive coding lesson every Monday.
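To give a sense of scale, the exercises those weekly prompts point to are modest. The sketch below is a purely illustrative example of the kind of first lesson a beginner might work through – it is not taken from an actual Code Year email, and it is written in TypeScript rather than plain JavaScript for clarity.

```typescript
// An illustrative beginner exercise (hypothetical, not from Code Year):
// store a value in a variable, then print a greeting a few times in a loop.
const learner: string = "world";

for (let i = 1; i <= 3; i++) {
  // Template strings let the variable be dropped straight into the message.
  console.log(`Hello, ${learner}! This is greeting number ${i}.`);
}
```

A handful of lines like these are enough to introduce variables, loops and output – the building blocks the rest of a code year builds on.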

In professional terms, it’s easy to see why knowing how to put together a program is a valuable skill: more and more jobs require some technical know-how, and the most skilled students have glittering prospects ahead of them. But with only a fraction of those signing up for free lessons ever likely to reach even a semi-professional level of skill, are movements like Codecademy able to offer more than good intentions?

The answer, I believe, is a resounding yes. Because learning about coding doesn’t just mean being able to make or fix a particular program; it also means learning how to think about the world in a certain way – as a series of problems ripe for reasoned, systematic solution. And while expertise and fluency may be hard-won commodities, simply learning to think like someone coding a solution to a problem can mean realising how the reasoned, systematic approaches someone else took might not be perfect – or, perhaps, neither reasonable nor systematic at all.

‘No magical safeguards’

Like Neo’s moment of revelation in the first Matrix movie, learning to picture the code behind the digital services you are using means realising that what you are looking at is not an immutable part of the universe; it is simply a conditional, contingent something cooked up by other human coders. And this is the divide that matters more than any other between coding insiders and outsiders: realising that the system you are using is only a system; that it can be changed and criticised; and that, even if you do not personally have the skills to rip it apart and report on the results, someone else probably does and already has done.

This last point – the ability to benefit from others’ expertise, and to know how to begin searching it out – is an especially important one. From cynical corporations to shadowy spam-mailers, there are plenty of people who would like nothing more than a digital citizenry ill-equipped to ask what lies beneath the surface. Thinking differently does not demand coding mastery. It simply requires recognition that even the most elegant digital service has its limitations and encoded human biases – and that it is possible for more troubling cargoes to be encoded, too.

In 2010, for example, an FBI investigation revealed that one suburban Philadelphia school district had included malicious software on laptops given out to pupils that allowed the computers to be used for covert surveillance via their cameras and network connections. The software in question would have been undetectable to all but the most devotedly expert of investigators. Since the case emerged, however, the widespread documentation and discussion it provoked have left those alert to such possibilities far better prepared to defend against them in future.

Codecademy and its ilk have no magical safeguards to offer or instant paths to understanding. For many people, though, signing up will be a first step towards asking a better class of question about their online world – and searching a little longer and harder for better answers within it.

And in case you are still wondering – 10 is two, written in binary.
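For anyone who wants to check the arithmetic for themselves, here is a minimal, purely illustrative sketch (again in TypeScript) converting between binary strings and ordinary numbers.

```typescript
// In binary, each digit is a power of two, so the two-digit string "10"
// means 1 × 2 + 0 × 1 = 2.
const asNumber = parseInt("10", 2); // parse "10" as a base-2 number
console.log(asNumber);              // prints 2

const asBinary = (2).toString(2);   // render the number two in base 2
console.log(asBinary);              // prints "10"
```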

Source: BBC


On Prism, privacy and personal data


News that the US National Security Agency has collected data from major tech firms makes Tom Chatfield wonder: is today’s internet the one we wanted, or deserve?


In the early days of the web, much of the debate around technology’s opportunities and hazards focused on anonymity. Online, as Peter Steiner’s iconic 1993 cartoon for the New Yorker put it, nobody knew you were a dog: you could say what you liked, try out different selves, and build new identities. Privacy was what you enjoyed by default, and breached at your own convenience.

The last decade has seen a startling shift from these origins. As internet-connected technologies have become ever more widespread, the fantasy of a virtual realm set apart from reality has given way to something more messily human. Today, our digital shadows cleave ever-more-closely to our real-world identities, reflecting our interests, activities and relationships. Humanity has flooded online, and largely chosen an augmented rather than an alternate life.

In this context, privacy is not so much a matter of secrecy as of control. From medical details to birthdays, hobbies and hang-ups, there’s little that we don’t reveal in some context. Instead of sketching second selves, most of us share personal information in order to gain value from countless digital services, and expect in return to control how this information is used – and for those using it to do so appropriately and securely.

So, what should one make of the news that major tech firms may have been passing some of this information on to the US National Security Agency (NSA)? Even before the so-called Prism scandal and its associated revelations from whistle-blower and ex-CIA employee Edward Snowden, our expectations of control were misguided. Did we really expect businesses whose models are based on gathering unprecedented quantities of data not to squeeze every last drop from their assets; or for the lifelong accumulation of online data about our every action not to hollow out hopes of control? Could we ever have hoped for governments and intelligence services to resist tapping the allure of troves into which so many have freely confessed so much?

The shock of Snowden’s story has partly been offset by “I-told-you-so” accounts along the lines of the above. Coupled to this, however, is an assumption that I find troubling: that the relentless gathering of personal data is simply the nature of online services, and something we must either accept wholesale, or reject alongside technology itself.

The confusion, here, is mistaking a particular business model based on advertising and data aggregation for an eternal truth about “the internet” – as if that existed in any coherent enough form to have a single purpose. It’s a confusion that many of the world’s most successful online businesses have colluded in, and with good reason. For a company whose profitability is based on gathering as much data as possible, the freedom that matters most is the freedom to provide as much information as possible – and for this information to be pooled and preserved indefinitely. The value of being free from the need to do this is anathema.

Blind faith

For the social psychologist Aleks Krotoski, writing in her new book Untangling the Web, “it may be that our digital shadows will become our marks of trust and reliability; to have none will be a sign that we have something we’re ashamed of, something to hide.” Data gathering therefore becomes a self-fulfilling prophecy: if enough people insist on its power and indispensability, opting out is no longer a straightforward option.

The Prism scandal suggests just how deeply embedded the cult of data has become at the highest levels of government and national security. In a data-hungry world, even those who are supposed to be guarding liberty seem to believe that the gathering, preservation, cross-referencing and mining of data is the future’s only recipe for civic life and national security alike. It’s a case of escalation on all sides, with every innovation a further opportunity to keep track of everyone and everything in the name of a nebulous good.

If there’s one lesson to be taken from the recent headlines, it’s that this recipe is flawed on every level. Projects like Prism reflect a faith in data that misses the point of what a supple or useful understanding of human-machine interactions looks like – and that blithely equates progress and justice with endlessly accumulating information.

As author Evgeny Morozov dryly tweeted during the coverage of Snowden’s actions, “It’s kind of hard to accept the argument that surveillance and big data work when NSA fails to watch and profile its own employees.” Although they may wield tremendous and alarmingly unaccountable power, the National Security Agency and its ilk are not puppet masters holding the key to modern living. The accumulating impact of so-called big data will be both profound and profoundly unpredictable; but one illusion that urgently needs dismantling is that it will “work” only as anticipated, or that it renders other debates redundant.

Unintended consequences are the rule rather than the exception of vast systems, and the internet is vaster than most: a network of networks already far distant from the last century’s visions of virtuality. Is today’s net the one we wanted, or that we deserve? It’s no one thing, of course. More than ever, though, the freedom to use and choose its best possibilities rests on asking such questions, and on challenging the belief that the “logic” of one promiscuous set of imperatives defines our online destiny.

Source: BBC