Frustrated Vegan


Picture: Is this Shub-Niggurath oozing ‘shellac/confectioner’s glaze’? Probably!

Even after eating ‘this way’ for so long, some things slip by. I found out today that my favorite gum, which has 100 mg of caffeine per piece and is included in the new Army MREs, is coated with confectioner’s glaze.

See, food companies have to use ‘layers of abstraction’ to disguise food products that may otherwise seem unappealing.

Confectioner’s glaze is derived from, or is a type of, shellac. Shellac is often described as a resinous coating found on certain trees in Asia. If you keep looking, and maybe stumble across a link or article geared toward food chemists or professional cooks, then you find out the truth:

that shit is beetle juice!

It is what the female lac beetle secretes when it metabolizes tree sap.

So…yeah.  Gross.  I don’t eat honey either.  I think that insects are proof that:

A- There is a god.

B- He hates us.

Bugs…I don’t want to touch them or see them, much less eat something secreted from their bodies!

Dammit. No more caffeine gum for me.

Time to go through my super secret sneaky snack stash again to check the labels!

Indeed

I woke up to a false alarm

and headache, red-eyes rolling,

though you sang me to sleep

with stuttering breath and whisper,

a touch of tongue and light

mimic of wind and moon

but gentler,

no less ephemeral than your transient nature

so varied,

likewise, gone with the thoughts

that bring you to me

a giant in my world-

sustain the lie

I’ll not be consigned

to ignore unpleasant truth

settle-down to drinkful oblivion

when you have so much to give me!

magnum opus indeed

more like a post-it note

scribbled with tense disagreement

a prayer for the dinner table

take me quickly, 

or take me with flood

but not with words or ideas

that show me the sickness of my own heart

because I am no servant

of even my own interests

and have grown weak, brittle

with amusement and lust

my sugar-spun bones

support a falsehood of form

and movement comes,

when it does,

with a vertigo doze

if I could allow my fall I could rest

an earth bed no more harsh than ‘home’

take me for good, 

or spear me with cruelty

and still, eventually

speak the number of my sin

my debt to you is the least of it-

is my failed dream, my deepest love,

my ersatz remorse, and my regret in hand

I can only show you the phantom actions

animated by thoughts

and named by so many gorgeous words

that you never believed anyhow

A Morally-Confused Marine

By Dennis Prager

Tuesday, February 05, 2013

Last week, the Washington Post published an opinion piece by a Marine captain titled, “I Killed People in Afghanistan. Was I Right or Wrong?”

The column by Timothy Kudo, who is now a graduate student at New York University, is a fine example of the moral confusion leftism has wrought over the last half century. Captain Kudo’s moral confusion may predate his graduate studies, but if so, it has surely been reinforced and strengthened at NYU.

The essence of Mr. Kudo’s piece is that before he served in Afghanistan he was ethically unprepared for killing, that killing is always wrong, and that war is therefore always wrong.

–“I held two seemingly contradictory beliefs: Killing is always wrong, but in war, it is necessary. How could something be both immoral and necessary?”

The statement, “killing is always wrong,” is the core of the captain’s moral confusion.

Where did he learn such nonsense? He had to learn it because it is not intuitive. Every child instinctively understands that it is right to kill in self-defense; every decent human being knows it was right to kill Nazis during World War II; and just about everyone understands that if Hitler, Stalin and Mao had been killed early enough, about one hundred million innocent lives would have been saved.

How is it possible that a Marine captain and graduate student does not know these things? How can he make a statement that is not only morally foolish but actually immoral?

The overwhelmingly likely answer is that Captain Kudo is a product of the dominant religion of our time, leftism. And one important feature of the left’s moral relativism and moral confusion is a strong pacifistic strain.

–“Many veterans are unable to reconcile such actions in war with the biblical commandment ‘Thou shalt not kill.’ When they come home from an environment where killing is not only accepted but is a metric of success, the transition to one where killing is wrong can be incomprehensible.”

I give Captain Kudo the benefit of the doubt that he does not know that the commandment in its original Hebrew reads, “Thou shalt not murder,” not “Thou shalt not kill.” The King James translators did an awe-inspiring job in translating the Bible. To this day, no other English translation comes close to conveying the majesty of the biblical prose. But the Hebrew is clear: “Lo tirtzach” means “Do not murder.” Hebrew, like English, has two primary words for homicide — “murder” and “kill.”

Murder is immoral or illegal killing.

Killing, on the other hand, can be, and often is, both moral and legal.

In order to ensure that no more Marines share the captain’s moral confusion, the Marine Corps should explain to all those who enlist that the Bible only prohibits murder, not killing. It should further explain that killing murderers — such as the Nazis and Japanese fascists in World War II and the Taliban today — is not only not morally problematic, it is the apotheosis of a moral good. Refusing to kill them means allowing them to murder.

–“This incongruity can have devastating effects. After more than 10 years of war, the military lost more active-duty members last year to suicide than to enemy fire.”

As we have seen, there is no “incongruity” here. And if so many members of the American military believe that it is so “incongruous” to kill the moral monsters of the Taliban — the people who throw lye in the faces of girls who attend school (and shoot them in the head if they’re outspoken about the right of girls to an education), who murder medical volunteers who give polio shots to Afghan children and who stone women charged with “dishonoring” their families — that they are committing suicide in unprecedented numbers, we have a real moral crisis in our military.

–“To properly wage war, you have to recalibrate your moral compass. Once you return from the battlefield, it is difficult or impossible to repair it.”

You only “have to recalibrate your moral compass” if you enter the military with a broken moral compass — one that neither understands the difference between murder and killing, nor how evil the Taliban is.

–“War makes us killers. We must confront this horror directly if we’re to be honest about the true costs of war.”

Other than the author, are there many Americans who enter the military in time of war without confronting the fact that they are likely to kill? Furthermore, it is not “war” that makes us killers; it is the Taliban. We kill them in order to protect Afghans from Taliban atrocities, and to protect America from another 9/11.

–“I want to believe that killing, even in war, is wrong.”

Why would anyone want to believe that? Were the soldiers who liberated Nazi death camps “wrong?”

–“The immorality of war is not a wound we can ignore.”

With all respect, I would rewrite this sentence to read: “The moral confusion of a Marine captain is not a wound we can ignore.”

Every American is deeply grateful to Captain Kudo for his service on behalf of his country, and on behalf of elementary human rights in Afghanistan. I have to wonder, however, why, given his belief that killing is always wrong, Timothy Kudo ever enlisted in the Marines.

On the other hand, he will fit in perfectly at NYU.

Shut up and play nice

Shut up and play nice: How the Western world is limiting free speech

By Jonathan Turley, Published: October 12

Free speech is dying in the Western world. While most people still enjoy considerable freedom of expression, this right, once a near-absolute, has become less defined and less dependable for those espousing controversial social, political or religious views. The decline of free speech has come not from any single blow but rather from thousands of paper cuts of well-intentioned exceptions designed to maintain social harmony.

In the face of the violence that frequently results from anti-religious expression, some world leaders seem to be losing their patience with free speech. After a video called “Innocence of Muslims” appeared on YouTube and sparked violent protests in several Muslim nations last month, U.N. Secretary General Ban Ki-moon warned that “when some people use this freedom of expression to provoke or humiliate some others’ values and beliefs, then this cannot be protected.”

It appears that the one thing modern society can no longer tolerate is intolerance. As Australian Prime Minister Julia Gillard put it in her recent speech before the United Nations, “Our tolerance must never extend to tolerating religious hatred.”

A willingness to confine free speech in the name of social pluralism can be seen at various levels of authority and government. In February, for instance, Pennsylvania Judge Mark Martin heard a case in which a Muslim man was charged with attacking an atheist marching in a Halloween parade as a “zombie Muhammed.” Martin castigated not the defendant but the victim, Ernie Perce, lecturing him that “our forefathers intended to use the First Amendment so we can speak with our mind, not to piss off other people and cultures — which is what you did.”

Of course, free speech is often precisely about pissing off other people — challenging social taboos or political values.

This was evident in recent days when courts in Washington and New York ruled that transit authorities could not prevent or delay the posting of a controversial ad that says: “In any war between the civilized man and the savage, support the civilized man. Support Israel. Defeat jihad.”

When U.S. District Judge Rosemary Collyer said the government could not bar the ad simply because it could upset some Metro riders, the ruling prompted calls for new limits on such speech. And in New York, the Metropolitan Transportation Authority responded by unanimously passing a new regulation banning any message that it considers likely to “incite” others or cause some “other immediate breach of the peace.”

Such efforts focus not on the right to speak but on the possible reaction to speech — a fundamental change in the treatment of free speech in the West. The much-misconstrued statement of Justice Oliver Wendell Holmes that free speech does not give you the right to shout fire in a crowded theater is now being used to curtail speech that might provoke a violence-prone minority. Our entire society is being treated as a crowded theater, and talking about whole subjects is now akin to shouting “fire!”

The new restrictions are forcing people to meet the demands of the lowest common denominator of accepted speech, usually using one of four rationales.

Speech is blasphemous

This is the oldest threat to free speech, but it has experienced something of a comeback in the 21st century. After protests erupted throughout the Muslim world in 2005 over Danish cartoons depicting the prophet Muhammad, Western countries publicly professed fealty to free speech, yet quietly cracked down on anti-religious expression. Religious critics in France, Britain, Italy and other countries have found themselves under criminal investigation as threats to public safety. In France, actress and animal rights activist Brigitte Bardot has been fined several times for comments about how Muslims are undermining French culture. And just last month, a Greek atheist was arrested for insulting a famous monk by making his name sound like that of a pasta dish.

Some Western countries have classic blasphemy laws — such as Ireland, which in 2009 criminalized the “publication or utterance of blasphemous matter” deemed “grossly abusive or insulting in relation to matters held sacred by any religion.” The Russian Duma recently proposed a law against “insulting religious beliefs.” Other countries allow the arrest of people who threaten strife by criticizing religions or religious leaders. In Britain, for instance, a 15-year-old girl was arrested two years ago for burning a Koran.

Western governments seem to be sending the message that free speech rights will not protect you — as shown clearly last month by the images of Nakoula Basseley Nakoula, the YouTube filmmaker, being carted away in California on suspicion of probation violations. Dutch politician Geert Wilders went through years of litigation before he was acquitted last year on charges of insulting Islam by voicing anti-Islamic views. In the Netherlands and Italy, cartoonists and comedians have been charged with insulting religion through caricatures or jokes.

Even the Obama administration supported the passage of a resolution in the U.N. Human Rights Council to create an international standard restricting some anti-religious speech (its full name: “Combating Intolerance, Negative Stereotyping and Stigmatization of, and Discrimination, Incitement to Violence and Violence Against, Persons Based on Religion or Belief”). Egypt’s U.N. ambassador heralded the resolution as exposing the “true nature” of free speech and recognizing that “freedom of expression has been sometimes misused” to insult religion.

At a Washington conference last year to implement the resolution, Secretary of State Hillary Rodham Clinton declared that it would protect both “the right to practice one’s religion freely and the right to express one’s opinion without fear.” But it isn’t clear how speech can be protected if the yardstick is how people react to speech — particularly in countries where people riot over a single cartoon. Clinton suggested that free speech resulting in “sectarian clashes” or “the destruction or the defacement or the vandalization of religious sites” was not, as she put it, “fair game.”

Given this initiative, President Obama’s U.N. address last month declaring America’s support for free speech, while laudable, seemed confused — even at odds with his administration’s efforts.

Speech is hateful

In the United States, hate speech is presumably protected under the First Amendment. However, hate-crime laws often redefine hateful expression as a criminal act. Thus, in 2003, the Supreme Court addressed the conviction of a Virginia Ku Klux Klan member who burned a cross on private land. The court allowed for criminal penalties so long as the government could show that the act was “intended to intimidate” others. It was a distinction without meaning, since the state can simply cite the intimidating history of that symbol.

Other Western nations routinely bar forms of speech considered hateful. Britain prohibits any “abusive or insulting words” meant “to stir up racial hatred.” Canada outlaws “any writing, sign or visible representation” that “incites hatred against any identifiable group.” These laws ban speech based not only on its content but on the reaction of others. Speakers are often called to answer for their divisive or insulting speech before bodies like the Canadian Human Rights Tribunal.

This month, a Canadian court ruled that Marc Lemire, the webmaster of a far-right political site, could be punished for allowing third parties to leave insulting comments about homosexuals and blacks on the site. Echoing the logic behind blasphemy laws, Federal Court Justice Richard Mosley ruled that “the minimal harm caused . . . to freedom of expression is far outweighed by the benefit it provides to vulnerable groups and to the promotion of equality.”

Speech is discriminatory

Perhaps the most rapidly expanding limitation on speech is found in anti-discrimination laws. Many Western countries have extended such laws to public statements deemed insulting or derogatory to any group, race or gender.

For example, in a closely watched case last year, a French court found fashion designer John Galliano guilty of making discriminatory comments in a Paris bar, where he got into a cursing match with a couple using sexist and anti-Semitic terms. Judge Anne-Marie Sauteraud read a list of the bad words Galliano had used, adding that she found (rather implausibly) he had said “dirty whore” at least 1,000 times. Though he faced up to six months in jail, he was fined.

In Canada, comedian Guy Earle was charged with violating the human rights of a lesbian couple after he got into a trash-talking session with a group of women during an open-mike night at a nightclub. Lorna Pardy said she suffered post-traumatic stress because of Earle’s profane language and derogatory terms for lesbians. The British Columbia Human Rights Tribunal ruled last year that since this was a matter of discrimination, free speech was not a defense, and awarded about $23,000 to the couple.

Ironically, while some religious organizations are pushing blasphemy laws, religious individuals are increasingly targeted under anti-discrimination laws for their criticism of homosexuals and other groups. In 2008, a minister in Canada was not only forced to pay fines for uttering anti-gay sentiments but was also enjoined from expressing such views in the future.

Speech is deceitful

In the United States, where speech is given the most protection among Western countries, there has been a recent effort to carve out a potentially large category to which the First Amendment would not apply. While we have always prosecuted people who lie to achieve financial or other benefits, some argue that the government can outlaw any lie, regardless of whether the liar secured any economic gain.

One such law was the Stolen Valor Act, signed by President George W. Bush in 2006, which made it a crime for people to lie about receiving military honors. The Supreme Court struck it down this year, but at least two liberal justices, Stephen Breyer and Elena Kagan, proposed that such laws should have less of a burden to be upheld as constitutional. The House responded with new legislation that would criminalize lies told with the intent to obtain any undefined “tangible benefit.”

The dangers are obvious. Government officials have long labeled whistleblowers, reporters and critics as “liars” who distort their actions or words. If the government can define what is a lie, it can define what is the truth.

For example, in February the French Supreme Court declared unconstitutional a law that made it a crime to deny the 1915 Armenian genocide by Turkey — a characterization that Turkey steadfastly rejects. Despite the ruling, various French leaders pledged to pass new measures punishing those who deny the Armenians’ historical claims.

 

The impact of government limits on speech has been magnified by even greater forms of private censorship. For example, most news organizations have stopped showing images of Muhammad, though they seem to have no misgivings about caricatures of other religious figures. The most extreme such example was supplied by Yale University Press, which in 2009 published a book about the Danish cartoons titled “The Cartoons That Shook the World” — but cut all of the cartoons so as not to insult anyone.

The very right that laid the foundation for Western civilization is increasingly viewed as a nuisance, if not a threat. Whether speech is deemed inflammatory or hateful or discriminatory or simply false, society is denying speech rights in the name of tolerance, enforcing mutual respect through categorical censorship.

As in a troubled marriage, the West seems to be falling out of love with free speech. Unable to divorce ourselves from this defining right, we take refuge instead in an awkward and forced silence.

jturley@law.gwu.edu

Jonathan Turley is the Shapiro professor of public interest law at George Washington University.


The Strid

Purely for entertainment purposes, and because I am totally horrified/fascinated by this place…

THE STRID

This is a place near Bolton Abbey in Yorkshire, England, where the River Wharfe narrows into a rocky area in the woods and becomes very, very dangerous and deep. Cracked.com did an article featuring it as one of the ‘top 10 beautiful places in the world that want to kill you!’ Read on:

http://www.boltonabbey.com/whattodo/strid.htm

https://howesue.wordpress.com/the-strid/

 

‘The Coming Civil War Over General Computing’

I am posting what I think is the most important thing I have read in quite a while:

‘The Coming Civil War Over General Computing’ by Cory Doctorow

This talk was delivered at Google in August, and for The Long Now Foundation in July 2012. A transcript of the notes follows.

I gave a talk in late 2011 at 28C3 in Berlin called “The Coming War on General Purpose Computing.”

In a nutshell, its hypothesis was this:

• Computers and the Internet are everywhere and the world is increasingly made of them.

• We used to have separate categories of device: washing machines, VCRs, phones, cars. Now we just have computers in different cases. Modern cars are computers we put our bodies into, Boeing 747s are flying Solaris boxes, and hearing aids and pacemakers are computers we put inside our bodies.

• This means that all of our sociopolitical problems in the future will have a computer inside them, too—and a would-be regulator saying stuff like this:

“Make it so that self-driving cars can’t be programmed to drag race”

“Make it so that bioscale 3D printers can’t make harmful organisms or restricted compounds”

Which is to say: “Make me a general-purpose computer that runs all programs except for one program that freaks me out.”

But there’s a problem. We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, “I can’t let you do that, Dave.”

Such a computer runs programs designed to be hidden from the owner of the device, and which the owner can’t override or kill. In other words: DRM. Digital Rights Management.

These computers are a bad idea for two significant reasons. First, they won’t solve problems. Breaking DRM isn’t hard for bad guys. The copyright wars’ lesson is that DRM is always broken with near-immediacy.

DRM only works if the “I can’t let you do that, Dave” program stays a secret. Once the most sophisticated attackers in the world liberate that secret, it will be available to everyone else, too.

Second, DRM has inherently weak security, which thereby makes overall security weaker.

Certainty about what software is on your computer is fundamental to good computer security, and you can’t know if your computer’s software is secure unless you know what software it is running.

Designing “I can’t let you do that, Dave” into computers creates an enormous security vulnerability: anyone who hijacks that facility can do things to your computer that you can’t find out about.

Moreover, once a government thinks it has “solved” a problem with DRM—with all its inherent weaknesses—that creates a perverse incentive to make it illegal to tell people things that might undermine the DRM.

You know, things like how the DRM works. Or “here’s a flaw in the DRM which lets an attacker secretly watch through your webcam or listen through your mic.”

I’ve had a lot of feedback from various distinguished computer scientists, technologists, civil libertarians and security researchers after 28C3. Within those fields, there is a widespread consensus that, all other things being equal, computers are more secure and society is better served when owners of computers can control what software runs on them.

Let’s examine for a moment what that would mean.

Most computers today are fitted with a Trusted Platform Module (TPM): a secure co-processor mounted on the motherboard. The TPM specifications are published, and an industry body certifies compliance with them. To the extent that the spec is good (and the industry body is diligent), it’s possible to be reasonably certain that you’ve got a real, functional TPM in your computer that faithfully implements the spec.

How is the TPM secure? It contains secrets: cryptographic keys. But it’s also secure in that it’s designed to be tamper-evident. If you try to extract the keys from a TPM, or remove the TPM from a computer and replace it with a gimmicked one, it will be very obvious to the computer’s owner.

One threat to TPM is that a crook (or a government, police force or other adversary) might try to compromise your computer — tamper-evidence is what lets you know when your TPM has been fiddled with.

Another TPM threat model is that a piece of malicious software will infect your computer.

Now, once your computer is compromised this way, you could be in great trouble. All of the sensors attached to the computer—mic, camera, accelerometer, fingerprint reader, GPS—might be switched on without your knowledge. Off goes the data to the bad guys.

All the data on your computer (sensitive files, stored passwords and web history)? Off it goes to the bad guys—or erased.

All the keystrokes into your computer—your passwords!—might be logged. All the peripherals attached to your computer—printers, scanners, SCADA controllers, MRI machines, 3D printers— might be covertly operated or subtly altered.

Imagine if those “other peripherals” included cars or avionics. Or your optic nerve, your cochlea, the stumps of your legs.

When your computer boots up, the TPM can ask the bootloader for a signed hash of itself and verify that the signature on the hash comes from a trusted party. Once you trust the bootloader to faithfully perform its duties, you can ask it to check the signatures on the operating system, which, once verified, can check the signatures on the programs that run on it.

This ensures that you know which programs are running on your computer — and that any programs running in secret have managed the trick by leveraging a defect in the bootloader, operating system or other components, and not because a new defect has been inserted into your system to create a facility for hiding things from you.
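To make that chain concrete, here is a minimal sketch of chained signature verification in Python. It assumes Ed25519 signatures via the `cryptography` package and a simplified layout where each stage ships with a detached signature over its SHA-256 digest; a real TPM does this in silicon, and every name below is illustrative, not any vendor’s actual interface.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_stage(image: bytes, signature: bytes, trusted_keys: list) -> bool:
    """Return True if any trusted key signed the SHA-256 digest of this image."""
    digest = hashlib.sha256(image).digest()
    for raw_key in trusted_keys:
        key = Ed25519PublicKey.from_public_bytes(raw_key)
        try:
            key.verify(signature, digest)
            return True
        except InvalidSignature:
            continue
    return False


def measured_boot(stages, trusted_keys):
    """Walk the chain: each verified stage is then trusted to vet the next.

    `stages` is a list of (name, image_bytes, signature_bytes) tuples,
    ordered bootloader -> operating system -> applications.
    """
    for name, image, signature in stages:
        if not verify_stage(image, signature, trusted_keys):
            raise RuntimeError(f"{name}: untrusted or tampered, halting boot")
        print(f"{name}: signature verified, handing off")
```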

This always reminds me of Descartes: he starts off by saying that he can’t tell what’s true and what’s not true, because he’s not sure if he really exists.

He finds a way of proving that he exists, and that he can trust his senses and his faculty for reason.

Having found a tiny nub of stable certainty on which to stand, he builds a scaffold of logic that he affixes to it, until he builds up an entire edifice.

Likewise, a TPM is a nub of stable certainty: if it’s there, it can reliably inform you about the code on your computer.

Now, you may find it weird to hear someone like me talking warmly about TPMs. After all, these are the technologies that make it possible to lock down phones, tablets, consoles and even some PCs so that they can’t run software of the owner’s choosing.

“Jailbreaking” usually means finding some way to defeat a TPM or TPM-like technology. So why on earth would I want a TPM in my computer?

As with everything important, the devil is in the details.

Imagine for a moment two different ways of implementing a TPM:

1. Lockdown

Your TPM comes with a set of signing keys it trusts, and unless your bootloader is signed by a TPM-trusted party, you can’t run it. Moreover, since the bootloader determines which OS launches, you don’t get to control the software in your machine.

2. Certainty

You tell your TPM which signing keys you trust—say, Ubuntu, EFF, ACLU and Wikileaks—and it tells you whether the bootloaders it can find on your disk have been signed by any of those parties. It can faithfully report the signature on any other bootloaders it finds, and it lets you make up your own damn mind about whether you want to trust any or all of the above.

Approximately speaking, these two scenarios correspond to the way that iOS and Android work: iOS only lets you run Apple-approved code; Android lets you tick a box to run any code you want. Critically, however, Android lacks the facility to do some crypto work on the software before boot-time and tell you whether the code you think you’re about to run is actually what you’re about to run.

It’s freedom, but not certainty.
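In code, the gap between the two scenarios is a single design decision: does the verification routine refuse, or report? Here is a minimal sketch of the “certainty” model under the same assumed Ed25519 scheme as above; the class and method names are hypothetical.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


class CertaintyTPM:
    """'Certainty' model: the owner edits the trust list, and the TPM
    reports what it finds rather than refusing to boot anything."""

    def __init__(self):
        self.trusted_signers = {}  # owner-chosen, e.g. {"Ubuntu": key_bytes}

    def trust(self, name: str, public_key_bytes: bytes) -> None:
        # The owner, not the vendor, decides who goes on this list.
        self.trusted_signers[name] = public_key_bytes

    def report(self, bootloader: bytes, signature: bytes) -> str:
        """Faithfully report who signed the bootloader; never block the boot."""
        digest = hashlib.sha256(bootloader).digest()
        for name, raw_key in self.trusted_signers.items():
            try:
                Ed25519PublicKey.from_public_bytes(raw_key).verify(signature, digest)
                return f"bootloader signed by trusted party: {name}"
            except InvalidSignature:
                continue
        # A report, not a refusal: your machine, your call.
        return "bootloader signature unrecognized; boot it if you choose"
```

The “lockdown” model would be the same class with `report` replaced by a hard refusal, and with a vendor-supplied trust list the owner cannot edit.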

In a world where the computers we’re discussing can see and hear you, where we insert our bodies into them, where they are surgically implanted into us, and where they fly our planes and drive our cars, certainty is a big deal.

This is why I like the idea of a TPM, assuming it is implemented in the “certainty” mode and not the “lockdown” mode.

If that’s not clear, think of it this way: a “war on general-purpose computing” is what happens when the control freaks in government and industry demand the ability to remotely control your computers.

The defenders against that attack are also control freaks — like me — but they happen to believe that device-owners should have control over their computers.

Both sides want control, but differ on which side should have control.

Control requires knowledge. If you want to be sure that songs can only be moved onto an iPod, but not off of an iPod, the iPod needs to know that the instructions being given to it by the PC (to which it is tethered) are emanating from an Apple-approved iTunes. It needs to know they’re not from something that impersonates iTunes in order to get the iPod to give it access to those files.

If you want to be sure that my PVR won’t record the watch-once video-on-demand movie that I’ve just paid for, you need to be able to ensure that the tuner receiving the video will only talk to approved devices whose manufacturers have promised to honor “do-not-record” flags in the programmes.

If I want to be sure that you aren’t watching me through my webcam, I need to know what the drivers are and whether they honor the convention that the little green activity light is always switched on when my camera is running.

If I want to be sure that you aren’t capturing my passwords through my keyboard, I need to know that the OS isn’t lying when it says there aren’t any keyloggers on my system.

Whether you want to be free—or want to enslave—you need control. And for that, you need this knowledge.

That’s the coming war on general purpose computing. But now I want to investigate what happens if we win it.

We could face an interesting prospect. This I call the coming civil war over general purpose computing.

Let’s stipulate that a victory for the “freedom side” in the war on general purpose computing would result in computers that let their owners know what was running on them. Computers would faithfully report the hash and associated signatures for any bootloaders they found, control what was running on computers, and allow their owners to specify who was allowed to sign their bootloaders, operating systems, and so on.

There are two arguments that we can make for this:

1. Human rights

If your world is made of computers, then designing computers to override their owners’ decisions has significant human rights implications. Today we worry that the Iranian government might demand import controls on computers, so that only those capable of undetectable surveillance are operable within its borders. Tomorrow we might worry about whether the British government would demand that NHS-funded cochlear implants be designed to block reception of “extremist” language, to log and report it, or both.

2. Property rights

The doctrine of first sale is an important piece of consumer law. It says that once you buy something, it belongs to you, and you should have the freedom to do anything you want with it, even if that hurts the vendor’s income. Opponents of DRM like the slogan, “You bought it, you own it.”

Property rights are an incredibly powerful argument. This goes double in America, where strong property rights enforcement is seen as the foundation of all social remedies.

This goes triple for Silicon Valley, where you can’t swing a cat without hitting a libertarian who believes that the major — or only — legitimate function of a state is to enforce property rights and contracts around them.

Which is to say that if you want to win a nerd fight, property rights are a powerful weapon to have in your arsenal. And not just nerd fights!

That’s why copyfighters are so touchy about the term “Intellectual Property”. This synthetic, ideologically-loaded term was popularized in the 1970s as a replacement for “regulatory monopolies” or “creators’ monopolies” — because it’s a lot easier to get Congress to help you police your property than it is to get them to help enforce your monopoly.

Here is where the civil war part comes in.

Human rights and property rights both demand that computers not be designed for remote control by governments, corporations, or other outside institutions. Both require that owners be allowed to specify what software they’re going to run: to freely choose the nub of certainty from which they will suspend the scaffold of their computer’s security.

Remember that security is relative: if you can control your computing environment, you are secured against attacks on your ability to freely use your music. This, however, erodes the music industry’s own security: its ability to charge you some kind of rent, on a use-by-use basis, for your purchased music.

If you get to choose the nub from which the scaffold will dangle, you get control and the power to secure yourself against attackers. If the government, the RIAA or Monsanto chooses the nub, they get control and the power to secure themselves against you.

In this dilemma, we know what side we fall on. We agree that at the very least, owners should be allowed to know and control their computers.

But what about users?

Users of computers don’t always have the same interests as the owners of computers— and, increasingly, we will be users of computers that we don’t own.

Where you come down on conflicts between owners and users is going to be one of the most meaningful ideological questions in technology’s history. There’s no easy answer that I know about for guiding these decisions.

Let’s start with a total pro-owner position: “property maximalism”.

• If it’s my computer, I should have the absolute right to dictate the terms of use to anyone who wants to use it. If you don’t like it, find someone else’s computer to use.

How would that work in practice? Through some combination of an initialization routine, tamper evidence, law, and physical control. For example, when you turn on your computer for the first time, you initialize a good secret password, possibly signed by your private key.

Without that key, no-one is allowed to change the list of trusted parties from which your computer’s TPM will accept bootloaders. We could make it illegal to subvert this system for the purpose of booting an operating system that the device’s owner has not approved. Such a law would make spyware really illegal, even more so than now, and would also ban the secret installation of DRM.

We could design the TPM so that if you remove it, or tamper with it, it’s really obvious — give it a fragile housing, for example, which is hard to replace after the time of manufacture, so it’s really obvious to a computer’s owner that someone has modified the device, possibly putting it in an unknown and untrustworthy state. We could even put a lock on the case.
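Here is a sketch of what that first-boot initialization might look like, assuming a password-derived owner credential (PBKDF2) guarding the trust list. It is an illustration of the idea, not a real TPM interface; the tamper-evident housing and the legal backstop have no software analogue here.

```python
import hashlib
import hmac
import os


class OwnerControlledTPM:
    """First-boot owner initialization: whoever sets the secret first is the
    owner, and only the owner may change the trusted-key list afterward."""

    ROUNDS = 100_000  # PBKDF2 iterations

    def __init__(self):
        self._salt = None
        self._owner_digest = None  # store a derived digest, never the secret
        self.trusted_keys = []

    def initialize(self, owner_secret: str) -> None:
        if self._owner_digest is not None:
            # Re-initialization should require a physical, tamper-evident
            # reset, so any takeover is obvious to the owner.
            raise PermissionError("already initialized")
        self._salt = os.urandom(16)
        self._owner_digest = hashlib.pbkdf2_hmac(
            "sha256", owner_secret.encode(), self._salt, self.ROUNDS)

    def add_trusted_key(self, owner_secret: str, key: bytes) -> None:
        """Changing the trust list requires proving you hold the owner secret."""
        if self._owner_digest is None:
            raise PermissionError("not initialized")
        candidate = hashlib.pbkdf2_hmac(
            "sha256", owner_secret.encode(), self._salt, self.ROUNDS)
        if not hmac.compare_digest(candidate, self._owner_digest):
            raise PermissionError("owner secret required to change trust list")
        self.trusted_keys.append(key)
```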

I can see a lot of benefits to this, but there are downsides, too.

Consider self-driving cars. There are a lot of these around already, of course, designed by Google and others. It’s easy to understand how, on the one hand, self-driving cars are an incredibly great development. We are terrible drivers, and cars kill the shit out of us: they’re the number one cause of death in America for people aged 5 to 34.

I’ve been hit by a car. I’ve cracked up a car. I’m willing to stipulate that humans have no business driving at all.

It’s also easy to understand how we might be nervous about people being able to homebrew their own car firmware. On one hand, we’d want the source code for cars to be open, because we’d want to subject it to wide scrutiny. On the other hand, it will be plausible to say, “Cars are safer if they use a locked bootloader that only trusts government-certified firmware”.

And now we’re back to whether you get to decide what your computer is doing.

But there are two problems with this solution:

First, it won’t work. As the copyright wars have shown, firmware locks aren’t very effective against dedicated attackers. People who want to spread mayhem with custom firmware will be able to do just that.

What’s more, it’s not a good security approach: if vehicular security models depend on all the other vehicles being well-behaved and the unexpected never arising, we are dead meat.

Self-driving cars must be conservative in their approach to their own conduct, and liberal in their expectations of others’ conduct.

This is the same advice you get in your first day of driver’s ed, and it remains good advice even if the car is driving itself.

Second, it invites some pretty sticky parallels. Remember the “information superhighway”?

Say we try to secure our physical roads by demanding that the state (or a state-like entity) gets to certify the firmware of the devices that cruise its lanes. How would we articulate a policy addressing the devices on our (equally vital) metaphorical roads—with comparable firmware locks for PCs, phones, tablets, and other devices?

After all, the general-purpose network means that MRIs, space-ships, and air-traffic control systems share the “information superhighway” with game consoles, Arduino-linked fart machines, and dodgy voyeur cams sold by spammers from the Pearl River Delta.

And consider avionics and power-station automation.

This is a much trickier one. If the FAA mandates a certain firmware for 747s, it’s probably going to want those 747s designed so that it and it alone controls the signing keys for their bootloaders. Likewise, the Nuclear Regulatory Commission will want the final say on the firmware for the reactor piles.

This may be a problem for the same reason that a ban on modifying car firmware is: it establishes the idea that a good way to solve problems is to let “the authorities” control your software.

But it may be that airplanes and nukes are already so regulated that an additional layer of regulation wouldn’t leak out into other areas of daily life — nukes and planes are subject to an extraordinary amount of no-notice inspection and reporting requirements that are unique to their industries.

But there’s a bigger problem with “owner controls”: what about people who use computers, but don’t own them?

This is not a group of people that the IT industry has a lot of sympathy for, on the whole.

An enormous amount of energy has been devoted to stopping non-owning users from inadvertently breaking the computers they are using, downloading menu-bars, typing random crap they find on the Internet into the terminal, inserting malware-infected USB sticks, installing plugins or untrustworthy certificates, or punching holes in the network perimeter.

Energy is also spent stopping users from doing deliberately bad things: they install keyloggers and spyware to ensnare future users, misappropriate secrets, snoop on network traffic, break their machines and disable the firewalls.

There’s a symmetry here. DRM and its cousins are deployed by people who believe you can’t and shouldn’t be trusted to set policy on the computer you own. Likewise, IT systems are deployed by computer owners who believe that computer users can’t be trusted to set policy on the computers they use.

As a former sysadmin and CIO, I’m not going to pretend that users aren’t a challenge. But there are good reasons to treat users as having rights to set policy on computers they don’t own.

Let’s start with the business case.

When we demand freedom for owners, we do so for lots of reasons, but an important one is that computer programmers can’t anticipate all the contingencies that their code might run up against — that when the computer says yes, you might need to still say no.

This is the idea that owners possess local situational awareness that can’t be perfectly captured by a series of nested if/then statements.

It’s also where communist and libertarian principles converge:

• Friedrich Hayek thought that expertise was a diffuse thing, and that you were more likely to find the situational awareness necessary for good decisionmaking very close to the decision itself — devolution gives better results than centralization.

• Karl Marx believed in the legitimacy of workers’ claims over their working environment, saying that the contribution of labor was just as important as the contribution of capital, and demanded that workers be treated as the rightful “owners” of their workplace, with the power to set policy.

For totally opposite reasons, they both believed that the people at the coalface should be given as much power as possible.

The death of mainframes was attended by an awful lot of concern over users and what they might do to the enterprise. In those days, users were even more constrained than they are today. They could only see the screens the mainframe let them see, and only undertake the operations the mainframe let them undertake.

When the PC and VisiCalc and Lotus 1-2-3 appeared, employees risked termination by bringing those machines into the office — or by taking home office data to use with those machines.

Workers developed computing needs that couldn’t be met within the constraints set by the firm and its IT department, and didn’t think that the legitimacy of their needs would be recognized.

The standard responses would involve some combination of the following:

• Our regulatory compliance prohibits the thing that will help you do your job better.

• If you do your job that way, we won’t know if your results are correct.

• You only think you want to do that.

• It is impossible to make a computer do what you want it to do.

• Corporate policy prohibits this.

These may be true. But often they aren’t, and even when they are, they’re the kind of “truths” that we give bright young geeks millions of dollars in venture capital to falsify—even as middle-aged admin assistants get written up by HR for trying to do the same thing.

The personal computer arrived in the enterprise by the back door, over the objections of IT, without the knowledge of management, at the risk of censure and termination. Then it made the companies that fought it billions. Trillions.

Giving workers powerful, flexible tools was good for firms because people are generally smart and want to do their jobs well. They know stuff their bosses don’t know.

So, as an owner, you don’t want the devices you buy to be locked, because you might want to do something the designer didn’t anticipate.

And employees don’t want the devices they use all day locked, because they might want to do something useful that the IT dept didn’t anticipate.

This is the soul of Hayekism — we’re smarter at the edge than we are in the middle.

The business world pays a lot of lip service to Hayek’s 1940s ideas about free markets. But when it comes to freedom within the companies they run, they’re stuck a good 50 years earlier, mired in the ideology of Frederick Winslow Taylor and his “scientific management”. In this way of seeing things, workers are just an unreliable type of machine whose movements and actions should be scripted by an all-knowing management consultant, who would work with the equally-wise company bosses to determine the one true way to do your job. It’s about as “scientific” as trepanation or Myers-Briggs personality tests; it’s the ideology that let Toyota cream Detroit’s big three.

So, letting enterprise users do the stuff they think will allow them to make more money for their companies will sometimes make their companies more money.

That’s the business case for user rights. It’s a good one, but really I just wanted to get it out of the way so that I could get down to the real meat: Human rights.

This may seem a little weird on its face, but bear with me.

Earlier this year, I saw a talk by Hugh Herr, Director of the Biomechatronics group at The MIT Media Lab. Herr’s talks are electrifying. He starts out with a bunch of slides of cool prostheses: Legs and feet, hands and arms, and even a device that uses focused magnetism to suppress activity in the brains of people with severe, untreatable depression, to amazing effect.

Then he shows this slide of him climbing a mountain. He’s buff, he’s clinging to the rock like a gecko. And he doesn’t have any legs: just these cool mountain climbing prostheses. Herr looks at the audience from where he’s standing, and he says, “Oh yeah, didn’t I mention it? I don’t have any legs, I lost them to frostbite.”

He rolls up his trouser legs to show off these amazing robotic gams, and proceeds to run up and down the stage like a mountain goat.

The first question anyone asked was, “How much did they cost?”

He named a sum that would buy you a nice brownstone in central Manhattan or a terraced Victorian in zone one in London.

The second question asked was, “Well, who will be able to afford these?”

To which Herr answered, “Everyone. If you have to choose between a 40-year mortgage on a house and a 40-year mortgage on legs, you’re going to choose legs.”

So it’s easy to consider the possibility that there are going to be people — potentially a lot of people — who are “users” of computers that they don’t own, and where those computers are part of their bodies.

Most of the tech world understands why you, as the owner of your cochlear implants, should be legally allowed to choose the firmware for them. After all, when you own a device that is surgically implanted in your skull, it makes a lot of sense that you have the freedom to change software vendors.

Maybe the company that made your implant has the very best signal processing algorithm right now, but if a competitor patents a superior algorithm next year, should you be doomed to inferior hearing for the rest of your life?

And what if the company that made your ears went bankrupt? What if sloppy or sneaky code let bad guys do bad things to your hearing?

These problems can only be overcome by the unambiguous right to change the software, even if the company that made your implants is still a going concern.

That will help owners. But what about users?

Consider some of the following scenarios:

• You are a minor child and your deeply religious parents pay for your cochlear implants, and ask for the software that makes it impossible for you to hear blasphemy.

• You are broke, and a commercial company wants to sell you ad-supported implants that listen in on your conversations and insert “discussions about the brands you love”.

• Your government is willing to install cochlear implants, but they will archive everything you hear and review it without your knowledge or consent.

Far-fetched? The Canadian border agency was just forced to abandon a plan to fill the nation’s airports with hidden high-sensitivity mics that were intended to record everyone’s conversations.

Will the Iranian government, or Chinese government, take advantage of this if they get the chance?

Speaking of Iran and China, there are plenty of human rights activists who believe that boot-locking is the start of a human rights disaster. It’s no secret that high-tech companies have been happy to build “lawful intercept” back-doors into their equipment to allow for warrantless, secret access to communications. As these backdoors are now standard, the capability is still there even if your country doesn’t want it.

In Greece, there is no legal requirement for lawful intercept on telecoms equipment.

During the 2004/5 Olympic bidding process, an unknown person or agency switched on the dormant capability, harvested an unknown quantity of private communications from the highest level, and switched it off again.

Surveillance in the middle of the network is nowhere near as interesting as surveillance at the edge. As the ghosts of Messrs Hayek and Marx will tell you, there’s a lot of interesting stuff happening at the coal-face that never makes it back to the central office.

Even “democratic” governments know this. That’s why the Bavarian government was illegally installing the “Bundestrojaner” — literally, “federal trojan” — on people’s computers, gaining access to their files and keystrokes and much else besides. So it’s a safe bet that the totalitarian governments will happily take advantage of boot-locking and move the surveillance right into the box.

You may not import a computer into Iran unless you limit its trust model so that it only boots operating systems with lawful-intercept backdoors built in.

Now, with an owner-controls model, the first person to use a machine gets to initialize the list of trusted keys and then lock it with a secret or other authorization token. What this means is that the state customs authority must initialize each machine before it passes into the country.

Maybe you’ll be able to do something to override the trust model. But by design, such a system will be heavily tamper-evident, meaning that a secret policeman or informant can tell at a glance whether you’ve locked the state out of your computer. And it’s not just repressive states, of course, who will be interested in this.

Remember that there are four major customers for the existing censorware/spyware/lockware industry: repressive governments, large corporations, schools, and paranoid parents.

The technical needs of helicopter mums, school systems and enterprises are convergent with those of the governments of Syria and China. They may not share ideological ends, but they have awfully similar technical means to those ends.

We are very forgiving of these institutions as they pursue their ends; you can do almost anything if you’re protecting shareholders or children.

For example, remember the widespread indignation, from all sides, when it was revealed that some companies were requiring prospective employees to hand over their Facebook login credentials as a condition of employment?

These employers argued that they needed to review your lists of friends, and what you said to them in private, before determining whether you were suitable for employment.

Facebook checks are the workplace urine test of the 21st century. They’re a means of ensuring that your private life doesn’t have any unsavoury secrets lurking in it, secrets that might compromise your work.

The nation didn’t buy this. From senate hearings to newspaper editorials, the country rose up against the practice.

But no one seems to mind that many employers routinely insert their own intermediate keys into their employees’ devices — phones, tablets and computers. This allows them to spy on your Internet traffic, even when it is “secure”, with a lock showing in the browser.

It gives your employer access to any sensitive site you access on the job, from your union’s message board to your bank to Gmail to your HMO or doctor’s private patient repository. And, of course, to everything on your Facebook page.

There’s wide consensus that this is OK, because the laptop, phone and tablet your employer issues to you are not your property. They are company property.

And yet, the reason employers give us these mobile devices is that there is no longer any meaningful distinction between work and home.

Corporate sociologists who study the way that we use our devices find time and again that employees are not capable of maintaining strict divisions between “work” and “personal” accounts and devices.

America is the land of the 55-hour work-week, a country where few professionals take any meaningful vacation time, and when they do get away for a day or two, take their work-issued devices with them.

Even in traditional workplaces, we recognize human rights. We don’t put cameras in the toilets to curtail employee theft. If your spouse came by the office on your lunch break and the two of you went into the parking lot so that she or he could tell you that the doctor says the cancer is terminal, you’d be aghast and furious to discover that your employer had been spying on you with a hidden mic.

But if you used your company laptop to access Facebook on your lunchbreak, wherein your spouse conveys to you that the cancer is terminal, you’re supposed to be OK with the fact that your employer has been running a man-in-the-middle attack on your machine and now knows the most intimate details of your life.

There are plenty of instances in which rich and powerful people — not just workers and children and prisoners — will be users instead of owners.

Every car-rental agency would love to be able to lo-jack the cars they rent to you; remember, an automobile is just a computer you put your body into. They’d love to log all the places you drive to for “marketing” purposes and analytics.

There’s money to be made in finagling the firmware on the rental-car’s GPS to ensure that your routes always take you past certain billboards or fast-food restaurants.

But in general, the poorer and younger you are, the more likely you are to be a tenant farmer in some feudal lord’s computational lands. The poorer and younger you are, the more likely it’ll be that your legs will cease to walk if you get behind on payments.

What this means is that any thug who buys your debts from a payday lender could literally — and legally — threaten to take your legs (or eyes, or ears, or arms, or insulin, or pacemaker) away if you failed to come up with the next installment.

Earlier, I discussed how an owner override would work. It would involve some combination of physical access-control and tamper-evidence, designed to give owners of computers the power to know and control what bootloader and OS was running on their machine.

How would a user-override work? An effective user-override would have to leave the underlying computer intact, so that when the owner took it back, she could be sure that it was in the state she believed it to be in. In other words, we need to protect users from owners and owners from users.

Here’s one model for that:

Imagine that there is a bootloader that can reliably and accurately report on the kernels and OSes it finds on the drive. This is the prerequisite for state/corporate-controlled systems, owner-controlled systems, and user-controlled systems.

Now, give the bootloader the power to suspend any running OS to disk, encrypting all its threads and parking them, and the power to select another OS from the network or an external drive.

Say I walk into an Internet cafe, and there’s an OS running that I can verify. It has a lawful interception back-door for the police, storing all my keystrokes, files and screens in an encrypted blob which the state can decrypt.

I’m an attorney, doctor, corporate executive, or merely a human who doesn’t like the idea of his private stuff being available to anyone who is friends with a dirty cop.

So, at this point, I give the three-finger salute with the F-keys. This drops the computer into a minimal bootloader shell, one that invites me to give the net-address of an alternative OS, or to insert my own thumb-drive and boot into an operating system there instead.

The cafe owner’s OS is parked and I can’t see inside it. But the bootloader can assure me that it is dormant and not spying on me as my OS fires up. When it’s done, all my working files are trashed, and the minimal bootloader confirms it.

This keeps the computer’s owner from spying on me, and keeps me from leaving malware on the computer to attack its owner.
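A minimal sketch of that park-and-restore flow, with AES-GCM (again from the Python `cryptography` package) standing in for whatever sealing mechanism a real bootloader would use. The function names and the isolation stub are hypothetical.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def park(owner_os: bytes):
    """Suspend-and-encrypt the owner's OS; the user sees only an opaque blob."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    return AESGCM(key).encrypt(nonce, owner_os, None), key, nonce


def run_in_isolation(os_image: bytes) -> None:
    """Stub standing in for 'boot this OS with no access to parked state'."""
    print(f"booting user OS ({len(os_image)} bytes) in isolation")


def user_override_session(owner_os: bytes, user_os: bytes) -> bytes:
    """The minimal-bootloader flow: park, run the user's OS, wipe, restore."""
    blob, key, nonce = park(owner_os)  # owner state sealed away from the user

    run_in_isolation(user_os)          # user computes without being watched

    # On exit, the user's working files are trashed (not modeled here), so no
    # malware survives; the owner's OS comes back exactly as it was parked.
    return AESGCM(key).decrypt(nonce, blob, None)
```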

There will be technological means of subverting this, but there is a world of difference between starting from a design spec that aims to protect users from owners (and vice-versa) than one that says that users must always be vulnerable to owners’ dictates.

Fundamentally, this is the difference between freedom and openness — between free software and open source.

Now, human rights and property rights often come into conflict with one another. For example, landlords aren’t allowed to enter your home without adequate notice. In many places, hotels can’t throw you out if you overstay your reservation, provided that you pay the rack rate for the room — that’s why you often see rack rates posted on the back of the room door.

Repossession of leased goods — cars, for example — is limited by procedures that require notice and the opportunity to rebut claims of delinquent payments.

When these laws are “streamlined” to make them easier for property holders, we often see human rights abuses. Consider robo-signing eviction mills, which used fraudulent declarations to evict homeowners who were up to date on their mortgages—and even some who didn’t have mortgages.

The potential for abuse in a world made of computers is much greater: your car drives itself to the repo yard. Your high-rise apartment building switches off its elevators and climate systems, stranding thousands of people until a disputed license payment is settled.

Sounds fanciful? This has already happened with multi-level parking garages.

Back in 2006, a 314-car Robotic Parking model RPS1000 garage in Hoboken, New Jersey, took all the cars in its guts hostage, locking down the software until the garage’s owners paid a licensing bill that they disputed.

They had to pay it, even as they maintained that they didn’t owe anything. What the hell else were they going to do?

And what will you do when your dispute with a vendor means that you go blind, or deaf, or lose the ability to walk, or become suicidally depressed?

The negotiating leverage that accrues to owners over users is total and terrifying.

Users will be strongly incentivized to settle quickly, rather than face the dreadful penalties that could be visited on them in the event of dispute. And when the owner of the device is the state or a state-sized corporate actor, the potential for human rights abuses skyrockets.

This is not to say that owner override is an unmitigated evil. Think of smart meters that can override your thermostat at peak loads.

Such meters allow us to switch off coal and the other dirty power sources that would otherwise be varied up at peak times.

But they work best if users — homeowners who have allowed the power-company to install a smart-meter — can’t override the meters. What happens when griefers, crooks, or governments trying to quell popular rebellion use this to turn heat off during a hundred year storm? Or to crank heat to maximum during a heat-wave?

The HVAC in your house can hold the power of life and death over you — do we really want it designed to allow remote parties to do stuff with it even if you disagree?

The question is simple. Once we create a design norm of devices that users can’t override, how far will that creep?

Especially risky would be the use of owner override to offer payday loan-style services to vulnerable people: Can’t afford artificial eyes for your kids? We’ll subsidize them if you let us redirect their focus to sponsored toys and sugar-snacks at the store.

Foreclosing on owner override, however, has its own downside. It probably means that there will be poor people who will not be offered some technology at all.

If I can lo-jack your legs, I can lease them to you with the confidence of my power to repo them if you default on payments. If I can’t, I may not lease you legs unless you’ve got a lot of money to begin with.

But if your legs can decide to walk to the repo-depot without your consent, you will be totally screwed the day that muggers, rapists, griefers or the secret police figure out how to hijack that facility.

It gets even more complicated, too, because you are the “user” of many systems in the most transitory ways: subway turnstiles, elevators, the blood-pressure cuff at the doctor’s office, public buses or airplanes. It’s going to be hard to figure out how to create “user overrides” that aren’t nonsensical. We can start, though, by saying a “user” is someone who is the sole user of a device for a certain amount of time.

This isn’t a problem I know how to solve. Unlike the War on General Purpose Computers, the Civil War over them presents a series of conundra without (to me) any obvious solutions.

These problems are a way off, and they only arise if we win the war over general purpose computing first.

But come victory day, when we start planning the constitutional congress for a world where regulating computers is acknowledged as the wrong way to solve problems, let’s not paper over the division between property rights and human rights.

This is the sort of division that, while it festers, puts the most vulnerable people in our society in harm’s way. Agreeing to disagree on this one isn’t good enough. We need to start thinking now about the principles we’ll apply when the day comes.

If we don’t start now, it’ll be too late.

Link to original:  http://boingboing.net/2012/08/23/civilwar.html