Writing Workshop: Memoir

Life/Story is a workshop for writers who are working on a memoir or have one in mind. Five writers meet with Ansary for two-hour sessions to discuss their work. Each writer submits a piece by email beforehand: an excerpt from a work-in-progress, a stand-alone narrative piece, an outline, or ruminations on the structure of the intended work. Whatever they submit gets workshopped at the next session: the group discusses it, offers analysis and suggestions, and gives productive feedback. The atmosphere is both supportive and intensive. Memoir is the intersection of memory and story, and in this workshop the emphasis falls equally on story—the “what happened”—and on writing—the “how it’s told.” The workshop costs $350 for six meetings over twelve weeks.


Life/Story is founded on the premise that every life not only teems with stories but is a story. No one really knows what their story is until they look for it; the trick is not merely to remember what happened but to find the story (or stories) in it. A memoir can be about sensational experiences that depart radically from the byways of normal life, but it doesn’t have to be. Anything you might find in a novel, you might find in a life—and therefore in a memoir. The riveting Russian novel Oblomov is about a guy who never gets out of bed. Mrs. Dalloway is about a woman who goes out to buy flowers for a party. It’s safe to say that most people have as much to write about as Oblomov or Mrs. Dalloway.

For further thoughts on the art of memoir, read Ansary’s interview with David Henry Sterry.

For information and schedule, contact: 

tamim@mirtamimansary.com

 or text

(415) 359-7988

 

Ansary, a long-time author, editor, and teacher, has written three book-length memoirs. West of Kabul, East of New York is his bestselling literary memoir about his own bicultural life. The Other Side of the Sky, an as-told-to memoir about land-mine victim Farah Ahmadi, made the New York Times extended bestseller list. He has just published a third memoir, Road Trips, which traces the story-like arc of one whole life by recounting three iconic journeys. In 2008–09, he developed a workshop to help young Afghan-Americans tell their stories; it resulted in an anthology called Snapshots: This Afghan-American Life, featuring stories by 15 young writers. For 22 years, as leader of the San Francisco Writers Workshop, he worked with dozens of writers crafting memoirs, many of which ended up as successful published books, such as David Sterry’s bestselling Chicken: Self-Portrait of a Young Man for Rent and Michael Chorost’s PEN-award-winning Rebuilt: How Becoming Part Computer Made Me More Human. Ansary has also taught memoir-writing workshops at Reed College and at Taheima Resort in Puerto Vallarta.

 

 

The Public Good


 

 I worry that the idea of a common good is declining. Suddenly, for example, that dour intellectual battleaxe of the 1950s, Ayn Rand, has found an enthusiastic new audience among young adults. This is the same Ayn Rand who identified self-interest as the highest good and preached that caring about others was a fake value invented by the contemptible weak as a means of hobbling the heroic strong.

Somehow, her ideas have acquired a patina of cool.

I was fulminating about all this the other day, sounding, I’m sure, like the crusty old codgers of my youth. Picture skinny old men shaking their canes and yelling in high-pitched, cracked voices, “Young people today! No respect!”

My wife heard my fulminations and took me to task. “Young people are no more selfish than they ever were,” she said. “In fact, less so. Just look at websites like Kickstarter and Kiva and Indiegogo, and how popular they are.” For anyone who doesn’t know, these websites let anyone seeking money for a cause connect with people who want to donate (or lend) money to that exact cause. And it’s working. People really are getting funding for all kinds of good works, and a lot of it is coming from the young; maybe most of it.

But I never disputed the idealism. I’m not saying young people are getting more selfish. I know lots of young adults who have compassionate feelings and want to reach out. They just want to choose who they reach out to. They want their giving to reflect who they are. Helping others becomes, to some extent, an act of self-definition, self-realization. Self-expression.

Which is fine. But I’m just saying, the social compact of old offered a different proposition. It proposed that individuals relinquish their idea of themselves as the center of the universe and see themselves as smaller parts of a greater whole, a society whose collective promise was that no one would be left (entirely) behind.

At the leftist end of the axis, that compact was expressed as socialism. And when I was young, though “communist” might have been a curse word to older folks in mainstream America, calling someone a “socialist” was no worse than calling them “European.” Many young people cheerfully embraced “socialist” as a label. They saw no stigma in it. In many quarters, “socialist” had a positive connotation. It meant you believed it was right to care about the well-being of the whole society and that you had a duty to contribute to that well-being. Giving money to a beggar was fine, but it was merely charity. Fighting for a social program that would help thousands was on a higher plane, more noble, and that’s what being a socialist was all about, that struggle.

That’s the thing that’s vanishing, it seems to me. In its place, rising up like swamp gas, is the notion that the whole will take care of itself if only every individual looks out for his or her own interest vigorously and competitively, giving no quarter and asking for no help. Seeking the well-being of one’s own individual self is what has glamour now.

I overheard a conversation between two twenty-somethings in a bookstore one day about an election. The guy was telling the woman that he was not going to vote for a certain candidate.

“Why not?” she asked. After all, the candidate had the right stand on many issues; and she went on to list positions of which she and her guy both evidently approved.

“Yes,” the guy admitted, “but on the other hand…” And he cited a list of issues on which the candidate was at odds with him. In fact, declared this fellow, he had decided not to vote at all, because: “There just isn’t any candidate out there who really represents ME.”

I thought about his expectation. I thought about the implication that the only candidate worth voting for is one whose preferences and positions exactly match your own. At some level (polls tell us) that is what many voters look for in a candidate now—a surrogate self: someone who “represents” them by looking, sounding, talking and thinking exactly as they do.

I have to say, I’m not one of those voters. A candidate who held exactly the same positions and preferences as mine would be ineffective. And a candidate exactly like me would be a disaster. I’m good at some things, but I know I’d be no good at being president. Or vice president. Or the senator from California. Or dogcatcher of a small town. I’m looking for someone whose positions and approaches I can approve of in the main, and who also, in my judgment, would be able to work with enough different people to effect some worthwhile changes and could take decisive but judicious action when needed.

To me, if you’re looking for a candidate exactly like yourself, you’re looking at voting as a form of self-expression.

What strikes me is the way this development in politics mirrors a modern trend driven by technology. This goes back to the algorithms that power all search engines. These identify the preferences of the person searching and offer them (the algorithm’s best guess of) what they’re looking for and also of what else they might like.

As some high-tech professional once put it (I forget who or where), “each person who visits Amazon.com enters a bookstore visited by no other person on Earth.”

That’s because anyone with a history of purchasing books on Amazon is offered a range of books selected by an algorithm based on that consumer’s earlier choices. The same is true of Netflix, Pandora, YouTube, et al.

The same is true of Google: every single person who seeks information from Google gets a different set of options. When I Google the term “Egypt,” I get lots of information about the Muslim Brotherhood, the Egyptian elections, the Arab Spring, etc. When a friend of mine Googles the same term on her computer, she gets a list of websites about mummies, the temples at Luxor, airlines offering bargain flights to Cairo, etc.

But that’s not the worst of it. Another friend of mine has enthusiastically embraced the idea of “seasteading”—of building floating cities on the ocean and declaring them sovereign countries. He tells me the idea is catching on wildly; there’s a virtual prairie fire of enthusiasm about it in the country. “Just Google seasteading,” he urged.

He said this because when he Googles the term he gets endless lists of blogs that rave about seasteading. When I Google it, I get sites on which people rant about how naive, dopey, and possibly unethical the idea is.

Here’s the creepy thing. I got these sites the first time I ever Googled “seasteading.” The list Google gave me wasn’t derived from choices I had previously made about this term. Somehow, Google’s algorithm already had an opinion about my opinion of seasteading. It turns out that Google’s algorithm has an opinion about my opinion of every topic I might ever look up.

What does this mean? To me it means that we’re slowly losing the capacity to see what the universe looks like from any place except where we are standing. As people interact less and less, live, with communities of other people in problem-solving settings—in offices, schools, town hall meetings, union sessions, conferences, and so on—and let their interactions with the world be mediated increasingly by search and information technology and its algorithms, this trend will speed up. Every person will in fact be the center of the universe.

Politically, my whole life I have been committed to the notion of a public good and to the idea that each of us has a duty to contribute to it. But that enterprise depends on a common vision that all the members of a society can enter into. Politics is partly about building that common vision. I fear for the prospects of such a politics in a world from which the very idea of a public good has vanished and nothing remains but private interests duking it out in a competition of all against all.


Not Your (Founding) Father’s Democracy

 

 

 

Another gut-wrenching presidential campaign season screams into full gear. What a process! Why on Earth did the founders ever craft such a system?

Actually, they didn’t. The process we are in the middle of bears little resemblance to the one that put George Washington in office. For better or for worse, huge innovations have entered the system. Here (as I see it) are the ten biggest changes. I wrote this column two elections ago, so I don’t include the impact of social media and the Internet in general. That’s material for a whole other column to come.

1. Today we have a popular vote.

In the first 34 years of our republic (spanning the terms of five presidents) we had no popular vote to speak of. Then as now, presidents were chosen by the Electoral College, as mandated by the Constitution, but at first, the electors in many states were simply appointed by state lawmakers. So, in New York, for example, the state legislature would meet behind closed doors and pick a slate of electors, and those electors would decide whom New Yorkers wanted for president. Gradually, however, states came around to letting voters pick electors, which is the system we have today. The first time enough states did this to make a popular vote even worth recording was 1824. (A total of 356,035 ballots were cast for president that year.)

2. Today we have political parties.

The Constitution never mentions political parties. The founders thought parties would be divisive and hoped to prevent any from forming. In their vision, the nation’s top leader would be chosen from among eminent personalities who had proven themselves above all special interests. The process would simply entail selecting the most capable of the available sages. The founders assumed such a lineup existed and always would.

They were naïve, of course. Today, no one can seriously run for president unless they belong to a party; and political parties by nature represent subsets of the nation, not the nation as a whole. A presidential election today represents a struggle between conglomerations of interest groups—rural vs. urban, oil interests vs. environment, and so on.

3. Today we have presidential campaigns.

This wasn’t part of the original plan. The founders considered “vote-chasing” undignified. Of course, supporters of early presidential hopefuls did write diatribes and polemics on behalf of their heroes, but George Washington held no campaign rallies. That I Like Tom button you’ve been hoarding probably references Tom Arnold, not Thomas Jefferson. Vote-chasing did not come into full bloom until the election of 1840. Not coincidentally, that was among the first elections in which a nationwide popular vote really mattered.

4. It now takes money to win the presidency.

Washington spent virtually nothing to become president. The next few candidates incurred only small costs—small enough to handle out of their own and their friends’ pockets. Really big money didn’t pour into presidential campaigns until after the Civil War. A crucial turning point came in 1896, when William McKinley’s campaign manager basically invented systematic fundraising. That year, McKinley raised and spent about seven million dollars to his opponent’s piddly $650,000. This year, according to the Financial Times of London, the two presidential candidates have spent over $1.2 billion between them. Whatever else a presidential election may be, it’s now a contest between fundraising honchos.

5. Persuasive techniques developed for business are used in politics now.

In the distant past, advertisers were in charge of herding existing demand toward their clients’ products. The advent of television, and the rise of “Madison Avenue,” brought a subtle change. Now advertising professionals took on the task of creating demand. In the 1950s, advertisers made the heady discovery that they could actually do this—motivate people to buy things they did not start out wanting. Political campaign professionals were quick to draw on the expertise of Madison Avenue to create, shape, mold, and herd public opinion. This tends to blur the boundary between what we think and what political professionals want us to think. Whatever else it may be, a presidential election is now a contest between marketing teams.

 


6. Today, candidates come to us in “bite-sized” portions.

It’s part of the effect of advertising in politics, but I think this one deserves separate mention. In 1952, Dwight D. Eisenhower’s campaign hired ad whiz Rosser Reeves straight out of Madison Avenue. Reeves had invented the slogan “melts in your mouth, not in your hand” for M&Ms (one of the century’s 15 greatest ad slogans, according to many advertising experts), and Eisenhower’s team thought Reeves might do for Ike what he had done for candy. Reeves happened upon a seminal idea called “spot advertising.” He saw that moments of time were for sale between hit shows on television. He could buy those “spots” for small bucks and thereby reach the huge audiences built at a cost of millions by the big companies that sponsored the shows. The only catch: he had to deliver a message in 30 seconds or less. Reeves made a series of “spot ads” for Ike that compressed a town-hall-meeting feeling into a 30-second clip. Today’s presidential campaigns consist largely of “spot ads,” “sound bites,” and the like.

7. Today the candidates interact with voters through mass media. 

About sixty years ago, technology made it possible for candidates to speak to millions at once through radio and television. Frank Merriam, who ran for governor of California in 1934, was among the first to really exploit the political potential of mass media—he used radio advertising (and fake newsreels) to squash the populist Upton Sinclair.

Today, the bulk of the money raised by presidential candidates goes into mass media buys. One consequence of addressing millions at once is that candidates have to deliver least-common-denominator messages. However… 

8. Mass media appeals are now filtered through “narrowcasting.” 

Mass media still rules, but so many forms of media now exist that campaigns can deliver tailored messages to different target audiences. Viewers experience these ads as mass appeals—as what the candidate is broadcasting to everybody. Actually, different demographic segments see slightly different messages. What’s more, the direct-mail industry has databases from which it can assemble lists of individuals fitting particular profiles based on the products they buy, the television shows they watch, the work they do, and so on. By mail and phone, therefore, a particularized message can be delivered to each individual, tailored to his or her opinions and leanings. The Internet will undoubtedly accelerate this trend.

9. Polling has come to permeate the election process. 

Scientific polling was invented in the 1920s as an instrument of business, but it didn’t enter politics until the late 1930s, when Franklin D. Roosevelt began using a private polling service. At that point, polling was still a one-way process: the president would give a speech and then see how it went over. 

In the election of 1960, however, the Kennedy campaign began running polls in a given area before the candidate’s appearances and using the results to write the speeches he would give there—which changed the function of polling. By 1976, Jimmy Carter’s key campaign advisors included a pollster, Pat Caddell. Reagan followed suit and brought his pollster into the White House to help him govern. All these precedents have endured.

Meanwhile, pollsters have refined their techniques through the use of “focus groups.” These are small groups of people selected to mirror a particular demographic profile. Campaign professionals sit down for in-depth discussions with a focus group to get behind mere numbers and root out people’s underlying emotions and unconscious leanings. In 1984, for example, focus group research helped Mondale discover that Gary Hart’s supporters felt uneasy about Hart’s ability to handle an international crisis. Ads based on that research stopped Hart’s momentum. 

Polling enables candidates to tell the voters what they want to hear. As a result, voters cannot tell what the candidates really think. Yet the opinions politicians glean from voters may be the very ones their own campaigns have planted out there through advertising. In combination, then, polling and opinion management create a hall of mirrors in which no one knows what anyone really thinks.

10. Today political consultants run presidential campaigns. 

Once upon a time, people who wanted to be president gathered a group of supporters and molded them into a staff of loyalists who did the tasks needed to get their man elected. 

Then in the early 1930s, a husband-and-wife team in California, Clem Whitaker and Leone Baxter, set up the first political consulting firm. They offered clients a complete package of campaign services, from developing strategy to writing speeches to catering fundraising dinners—in short, they turned campaigning into a paid service separable from any particular candidate or cause, just like lawyering or advertising.

Political consultants now dominate elections at every level. At this point, they still remain vaguely associated with one side or the other of the political spectrum, but when the fierce Democratic hired gun James Carville can marry his equally fierce Republican counterpart Mary Matalin, you know that electing a candidate exists today as a content-free craft in itself, independent of any particular worldly goal.

And yes, there is a Society of Political Consultants, and yes, they are holding an awards banquet in 2005 to hand out “Pollies” for the best political consulting of the past year. Whatever else it might be, a presidential election is now a race to win a Pollie.


The Case for Liberal Arts

 

 

It’s too soon to write obituaries for the classic, residential, liberal arts college. Applications at my own alma mater, Reed College, are up. Ditto for Haverford, Williams, and all their ilk.

But why would anyone pay for an education that provides no concrete job skills?

Seven arts

The answer traces back to the first European universities. Those universities had no founders but formed spontaneously because scholars gravitated to places with books and students gravitated to places with scholars. The University of Paris, for example, grew out of the community of learners around Notre Dame cathedral.

Early on, this first university organized learning into four colleges. Every student had to first get through the College of Art. Those who did were titled “beginners” or, in Latin, “baccalaureates”—whence comes our modern-day Bachelor (of Arts).

At that gateway college, students studied seven “arts”: grammar, rhetoric, logic, arithmetic, geometry, astronomy, and music. In short, they learned how to think, write, speak, argue, and calculate. Only then were they allowed to pursue advanced studies at the College of Theology, Law, or Medicine.

Mere baccalaureates could get positions in the church or secular jobs as “clerks” and “notaries,” so the College of Art did have vocational implications, but only as a by-product. Its core purpose was to turn raw noodles into “well-educated persons.”

That mission remains.

The well-educated person

A liberal arts education proposes to give students a survey from up high of the whole landscape of human knowledge. Then, B.A. in hand, students can make their way to the grubby real-world corner that suits them best. They’ll make better choices, goes the thinking, once they’ve seen the context. And their work will better serve the common good if they know how it fits in with the human endeavor.

Ultimately, then, the driving ideal of a liberal arts education is to forge well-educated persons. This presumes that “well-educated” is a coherent quality, quite apart from “good at this” and “good at that.”

What this quality is and how it’s formed remains always in play. In America, however, until about 30 years ago, the liberal arts curriculum had a definite three-part form:

1. A core course that gave students a big picture of where civilization had been and where it was going.

2. Distribution requirements that led students to take courses in disparate disciplines and thus experience different modes of thinking and the range of human thought.

3. A major that immersed students in deeper study of one field.

The last two planks remain intact, but student activists of my generation dented the first one. We charged that core humanities courses really boiled down to reverent study of books written by “dead white European males,” ignoring the contributions of women, Blacks, Latinos, Asians and others; and besides, we said, students being so varied, why should we all have to squeeze through the same portal?

Many colleges dropped the core course idea. But a few (Reed, for example) never abandoned the ancient doctrines. And at least one college, St. John’s, aggressively clung to a curriculum built almost entirely around “great books” (of Western civilization).

Now, however, many colleges are painstakingly reconstituting core humanities courses, often with a global cast. As it turns out, “we’re-all-different” is not really an argument for abandoning a core course. It’s the strongest reason to have one!

Elitism?

Still, the question remains: what good does it do any individual to be “well-educated”? Is this not an elitist concept analogous to the aristocratic notion of “gentleman?”

I’ll quote an answer someone gave me recently. Alicia Neumann earned her B.A. from Occidental, a traditional liberal arts college in California. Years later, she went back to school and got a master’s in public health. Now she works in her new field and is doing well. But what makes her good at her job, she confided, is mostly stuff she learned at Occidental.

“Just the ability to give and get information clearly!” she declared. “So much of my job consists of writing—emails, reports, letters! Or attending meetings, giving presentations. The ability to get my point across is major. I look around at some of my colleagues who skipped the liberal arts and they’re fuzzier at communication. It’s an obstacle in their work. It makes them less efficient.

“Then, there’s the ability to synthesize. A lot of what I did in college was collect information from many sources, discern patterns, and put it together to make a new point. Back then I did it with literature, but the underlying skill is applicable to everything I do now.

“One of my friends is a lobbyist in Washington, and she’s just zooming up through that world. Why? Because she can write and think.”

This is what a liberal arts education is about, just as it was 900 years ago at the College of Art in Paris. And this is why places like Reed and Occidental keep flourishing: they open pathways to leadership and power in America, not just because of whom one meets at such places, but because of what one learns there.

Yes, a good liberal arts education tends to produce America’s elite, but that’s not a reason to mark it down. It’s a reason to keep it open to students from all walks of life.

 


 

 

Romney’s Pranks

 

When I was a freshman at Carleton College in the mid-sixties, I was one of three guys on campus with long hair. Lots of people made fun of us, the usual gibe being: “Are you a boy or a girl?” One day, I went into a dorm called Musser, reputed to be “the jocks’ dorm.” Ten or twelve guys saw me in the hall and began to jeer. Sensing trouble, I tried to push past them, but one said, “Let’s cut off his hair.” I knew this was bad. I tried to bolt, but the pack caught me and swiftly brought me down. Suddenly one of them had a pair of scissors. They all tried to hold me down as the one boy worked the scissors, but I kept wriggling and struggling, so he kept accidentally stabbing the scissors into my skull. Probably, I should have stopped struggling. Just-let-it-happen would have been the safe move. But I was freaked out beyond reason at that point, simply flailing like an animal. Fortunately, just then, my friend Rich Libby wandered into Musser, saw what was happening, and waded in. Rich was a wrestler, a burly guy, and between us we managed to get me out of that place. It took me a while to get unjangled, though. And I never forgot that this thing had happened.

A couple of years ago, I went back to Carleton for a 40th reunion. My wife and I were staying in a fancy new dorm. Late that first night, some other alum stumbled in and took the room next to ours. The next day, I ran into this fellow in the hall and we walked to the dining room together. He looked familiar, and his name rang a bell, but I couldn’t place him till we got to talking about college days and he told me he had lived in Musser. Then it all snapped clear: he was part of that pack that tried to cut my hair: the ringleader in fact. I could tell he didn’t remember me, and he didn’t remember the episode. For him, it had not been memorable.

I thought about that event recently, when the story came out about Romney, in his prep-school days, joining a pack of boys to hold down some guy and cut off his hair. He said he had no memory of the event. He said it was just a prank. His campaign said people should not be held accountable for pranks they may have committed as students.

Pranks. I thought back to those guys attacking me with scissors in Musser Hall, and “prank” is not how I would describe what they were doing. What I felt from them was not humor but hatred. Not for me personally, to be sure, but for something I apparently represented, because I had long hair a year or two before long hair became a commonplace for guys.

Short-sheeting a pal—that, to me, is a prank. Forming up as a gang and going after some guy with a pair of scissors, even if only to cut his hair—that’s a hate crime. As hate crimes go, it’s a trivial one, but let’s be clear on the essential character of the act. And when I thought about the guy I met at the reunion, and when I heard about Romney’s so-called prank, I thought: “What kind of a guy would do a thing like that?”

Romney’s campaign has made much of the trivial nature of the prank. It was long ago; the American people have real issues, big issues, to worry about; why dredge up some moment of lighthearted merrymaking from the distant past? But I keep coming back to the question: “What kind of a guy would do a thing like that?”

“What kind of guy” is, I think, a legitimate question to ask, even all these years later, because Romney’s career after his school days raises the same question for me—and suggests the same answer. What kind of guy? A bully without much obvious capacity for empathy. As head of a private equity firm called Bain, Romney practiced what he calls “creative destruction.” He paints a picture of himself as a disciplined businessman, strengthening the overall economy by whipping inefficient companies into shape and culling those too hopelessly flawed to deserve survival. He would have us see him as analogous to those guys who buy decrepit buildings, fix them up, and sell them for a profit. In fact, Romney’s formula was quite the opposite of the house-flippers’. What he (and Bain) often did, it seems, was buy firms in the pink of health and bleed them dry.

Pete Kotz, writing for the Seattle Weekly this April, chronicles how, in the 1990s, as head of Bain, Romney bought a highly successful company called Georgetown Steel, which employed 750 people. The company’s workers had great benefits, including a handsome profit-sharing plan that kept them loyal. On the strength of the company’s productivity, Bain borrowed millions of dollars and paid it out as dividends to Bain investors and as fees to Romney himself and to the Bain management team he installed. These payments required that costs be cut to balance the books, so the new team eliminated profit sharing, cut back on maintenance, and stopped upgrading equipment. Four years later Georgetown went broke, and the 750 workers lost their jobs. Bain, however, managed to unload the company and its debt, and the bankruptcy did not hurt Romney and his investors at all, because they had already taken their money out early. Seen purely as an investment, Georgetown Steel was a success story.

Kotz offers another case in point: American Pad & Paper, which Bain bought in 1994. This Indiana plant had so much business it was running three shifts a day. The new Bain team fired all 258 workers and had them reapply for their same old jobs at lower wages. Their health-care benefits were cut by 50 percent. The cost-cutting improved the bottom line. Even so, six months later, Bain shut down the plant and shipped the jobs to Mexico.

Want more?  The century-old Armco steel mill in Kansas City, Missouri, was another booming firm with a generous profit-sharing plan for its workers. Romney bought it, combined it with two other companies, and formed a new conglomerate, GS Industries. Armco cost Romney $75 million. He paid $8 million down and borrowed the rest. After the acquisition, he immediately borrowed another $36 million by issuing bonds. This money went to Bain and its investors—i.e. to Romney and his cronies. Romney thus spent $8 million to get $36 million—but left GSI carrying a debt of $378 million. Bain went on charging GSI $900,000 a year in management fees and borrowed $97 million more to retool the plant: a company that had previously made an array of products now made only wire rods. To service the company’s massive debt, the new management cut down on maintenance. They stopped purchasing spare parts and when equipment broke, they rented instead of buying. They also cut funding for the pension plan.

Then GSI went bankrupt. The Armco workers in Kansas City all lost their jobs. The pension plan was so underfunded by then that the bankruptcy court drastically cut the pensions they were hoping to collect, now that they were out of work; even the shrunken payments they did get required that the feds—i.e. taxpayers—chip in $44 million to cover the gap. The bankruptcy did not hurt the people at Bain because they had already pocketed their millions, and the debt belonged to GSI, not to them; they could simply walk away from the corpse.

It turns out that Bain did not, as a rule, buy troubled companies and turn them around with ruthless efficiency and good management. What they often did was to buy profitable companies, use the profitability to borrow money, pocket the borrowed money as dividends, fees, and salaries, and leave the firm dying or dead. Financial writer Josh Kossman disputes the term “vulture capitalism” for such firms because “… vultures eat dead carcasses.” Romney’s Bain, by contrast, sought out healthy companies and fed on them.

To be sure, this is not what happened every time. Some companies bought and run for a while by Bain were still healthy when sold. How many? What’s the ratio? Well, it’s hard to tell because Bain won’t provide a list of the companies it has purchased—and it is not required to by law. But a Wall Street Journal study of 77 known Romney investments seems to show that one-third of the companies he bought ended up foundering and 20 percent of them went bankrupt. Four of the companies that went broke were among Romney’s top-10 moneymakers.

What kind of guy would lead a pack to wrestle down some poor guy and cut off his hair because they thought he was gay—and think of it as a prank? Well–the kind of guy who would operate as Romney did when he headed up Bain—that’s the kind of guy who might do it. (Pete Kotz’s article ran in the Seattle Weekly on April 18, 2012.)


What Is Money?

Puzzled Musing

 


I once ran across a website called Zeitgeist that was peddling a paranoid conspiracy theory about money and the Federal Reserve and banking in general. The paranoia seemed to stem from the writers’ observation that in the money system as it currently stands, banks create money by issuing debt, thus essentially (it seems) creating money out of thin air. The website saw this as a sinister sleight of hand.

Actually, the equivalency of money and debt matches up pretty closely to what I’ve read in a bunch of books about money–except that economists seem to know and take for granted that money is nothing but debt personified, and are not freaked out by it.

What freaks me out is the fact that the money-system seems so inherently insubstantial: it is nothing but the interconnected faith of many people about what everyone else believes and what they’re going to do based on that belief; it all works when everyone is on the same page, believing together, but when that interconnection breaks down or that faith disappears, the money vanishes. It doesn’t “go somewhere.” It just ceases to exist. Economists all appear to know this, but most, I find, tend to think of gold and silver in a different light–those forms of money are “tangible,” they say. They’re “real money.” And as long as paper money is backed by gold, the experts seem to say, then the paper money is more real.

To me, the puzzling crux of the matter comes in that phrase “backed by.” I’m a know-nothing in the discipline of economics, but as far as I can see, gold and silver are no more tangible than paper in their character as money–that is to say, as personifications of value. What actually and ultimately backs up any currency, whether it’s paper, gold, or conch shells, is real-world economic activity: stuff you can use, activity that produces stuff you can use, ingenuity that contributes to the production of stuff you can use, and above all the interactivity that helps make the stuff you can use more lavish, more complex, and more accessible to all involved.

My brother Riaz once wrote me a letter suggesting that charging interest for a loan is inherently a Ponzi scheme because when it’s time to pay back the loan, that extra money has to come from somewhere and hence it must come out of someone else’s pocket–a person who then has to take out a loan (at interest) to cover the expenditure, and so on in an endless, expanding chain. But that’s true only in a barter system.

In any more sophisticated economic system, a system based on credit (and hence debt), when a person borrows money to invest in a productive enterprise, the money grows, or at least it does if the enterprise succeeds in becoming productive. That is, the amount of economic activity and interactivity grows. And if that happens, when it’s time to pay back the loan, there actually is more of the fundamental underlying substance that money represents, the essence of value: more shoes, more food, more services, more exchange of all the above, etc. In short, there is actually more money.
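The argument above can be put in back-of-the-envelope numbers. Here is a toy sketch (my own illustration, not from any economics text; the figures are made up): if the enterprise grows value faster than the interest accrues, the repayment comes out of new production, not out of someone else’s pocket.

```python
# Toy model: a loan repaid out of new production rather than a
# zero-sum transfer. All numbers are hypothetical.
principal = 100.0      # borrowed to start an enterprise
interest_rate = 0.05   # lender's charge over the period
growth_rate = 0.12     # value the enterprise adds over the same period

# Value created by putting the loan to productive use:
output = principal * (1 + growth_rate)        # 112.0

# Repay principal plus interest out of that output:
repayment = principal * (1 + interest_rate)   # 105.0
surplus = output - repayment                  # 7.0 of genuinely new value

print(f"output={output:.2f} repayment={repayment:.2f} surplus={surplus:.2f}")
```

The surplus is the essay’s point in miniature: nobody else had to go into debt to supply the interest, because there is simply more value in the system than there was before. (If the enterprise fails, of course, growth_rate is negative and Riaz’s worry comes back.)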

It seems to me that we writers, artists, and other information-workers are getting pinched right now because the economic system–which is the network of all people among whom money circulates (i.e., who are contributing economic value and at some later time taking out economic value)–is saying, “You can’t join this club; you’re not entitled to receive economic value from the pool of value we’re creating, because you’re not putting any value into the pool. Your novel, your short story, your essays and whatnot have no money value, because we can get all that stuff for free now, thanks to the Internet (and other technologies).” As more and more people are told, “You’re not contributing anything we want and therefore you can’t be part of our club,” the club shrinks. And the shrinking of the club = the disappearing of money.

Anyway, that’s how it looks to me.


Lessons from Amsterdam

 

 

Legalize Pot Already

 

 

Debby and I went to Amsterdam recently and did a number of fun things besides smoke dope legally.

Actually, we never got around to smoking dope legally.

I wanted to. I intended to. There was a “coffeehouse” (as they’re called) half a block down the street from our apartment. Every night, when we dragged our tails home from our vacation festivities, we went past the place. Every morning I said to Deb, “Okay, tonight when we come home, let’s go in there and smoke a joint.” And Debby always said, “Yes, let’s.”

But every night, by the time we got home, I felt too bushed from doing other fun things to get stoned. And a peek into the coffeehouse never inspired me much–it just looked so dead in there: four or five stoners draped on the couches, looking bored. And the owner behind his counter, drearily sifting his product into little tins. He struck me as a small businessman, struggling to make ends meet in a difficult business. He looked to be in his late forties, a little guy with long hair and a handlebar moustache, a dragged-out parody of a stereotypical sixties hippie, which he could not actually have been: he was too young. This look of his was just a costume now. It was the uniform of the job.

The word low-key doesn’t describe the scene. One glimpse and I wanted to do something a little more active, like take a nap. Meanwhile, the bar down the block was hopping. Totally. The door was open, laughter and music were spilling into the street. In there, I saw young people joyously dancing and cheerfully chatting.

I got to thinking about the intense efforts we’re making in this country to keep pot illegal. I mean, really? I pictured the coffeehouses I saw in Amsterdam and thought, “This is what we’re scared of? This is what we’re pouring billions of law enforcement dollars into preventing? We’re arresting hundreds of thousands of people a year, and jamming up our jails, and creating a culture of criminality that has come to permeate our whole society–so that places like this don’t pop up in our cities?”

For crying out loud! Let’s legalize marijuana and get on with life.

Oh, there might be a surge in dope smoking for a few months–I’ll probably smoke a couple of joints myself–but after that? I’ll bet you anything most people will go back to doing whatever they were doing before. The smoking of weed will shrink down to a barely noticeable coffeehouse phenomenon, such as I saw in Amsterdam.

The only noticeable difference will be that prison space will suddenly be available aplenty, for murderers, rapists, home-invasion burglars, and thieves. And cops will suddenly have time to chase after those sorts.

The Self as Village

Improving the Inner Village

The World Health Organization defines health as that state in which a person not only copes with problems but seeks them out. I think of health as the underlying condition that enables a person to live a life of unbroken concentration and absorption. In the well-known Zen parable, Master A says to Master B, “I tread so lightly in the world I can walk on eggs and never break a one. Also, I need only whisper and my disciples hear me from across the river. Oh yes, and I can fly. What are your miracles?” Master B replies, “My miracle is that when I walk, I just walk. When I eat, I just eat.”

Conventional (Western) doctors typically define health as the absence of disease. Most “alternative health” systems define disease as the absence of health. By that definition, health has degrees. And when you think about it—of course it does.

When I lived in Portland, long ago, I devoted my life to creating and working within collectives and cooperative enterprises. These did not flourish into the way of the world, as I and my cohorts had hoped. But after I left Portland, I found that they had left me a useful legacy after all. I discovered this when I began to think about reforming my dissipated lifestyle. Yes, it’s true, I ate too much, drank too much, smoked too much, and in fact overdid everything I enjoyed until it was something I didn’t enjoy and then kept doing it. I was that guy Seinfeld jokes about who is actually two people, morning-guy and night-guy. Another martini will produce a hangover? Why should night-guy care? That’s morning-guy’s problem.

My brain and life were in a state of anarchy and disorder and all my energies were blocked. I knew that in order to gain clarity I had to restore my health. To do that, however, I had to acquire some self-discipline. That summer, I came up with a theory about how to do it. My theory was rooted in the idea that the self is not a single entity but a collection of voices, impulses, and personalities. So the same strategies that make a political process work might be usefully applied to the inner self.

I had long been haunted by the mystery of the inner and outer worlds, by the fact that the cosmos and microcosmos seem to mirror each other, by the striking way the universe of atomic and subatomic particles recapitulates somehow the macrocosmos of stars and galaxies. The state of one’s room invariably reflects the state of one’s head. The biological operations of the body—taking in food, converting it to energy, getting rid of waste—are exactly analogous to the social operations of a community, the tasks of production, consumption, and waste disposal.

I began to wonder about the possibility of applying the political analyses I had formulated during all those years of working on collectives to the problems of reforming the self. In the past, when I sought to impose a discipline on my life, the word “impose” stood out in boldface type. My life had been a see-saw between internal anarchy and internal fascism. When I sickened of dissolution, a law-and-order regime invariably took over, shut down the bars, arrested the dopers, clapped a curfew on the system, and generally came down nasty, reaaal nasty. Will power, you know.

Within a few weeks, inevitably, a popular revolt erupted. The masses within me stormed through the streets of my body, breaking cars and looting stores. Weeks of drunken rioting ensued, leaving me in a state of wrecked exhaustion. Conditions were now again ripe for a fascist coup.

And so I wondered if I could apply any of the lessons I learned from working in collectives all those years. Maybe I could get the many aspects and impulses within my Self to operate as a democratic collective. What I didn’t need was will power, an internal boss, an authoritarian power figure forcing me to follow some program. Maybe I could grow some self-discipline organically by giving heed to all my internal voices and getting them to come to a consensus, popular with all the elements within me.

It more or less worked, I’m happy to say. Not that I am any paragon of self-discipline. If I were I’d be fasting right now, and I’m not. But man, I’m a better man than I was, and I credit my theory of the self as a collective. Details at eleven.

Afghanistan 2012

 

   

In March of 2012, I went to Afghanistan for the first time in 10 years.  That last time, ten years ago, America had recently taken control of the country and the new Karzai government was just getting seated. Now, in 2012, the American occupation is supposedly winding down and there is no telling what the near future holds. Here’s my impression of the country today.

 

From Kite Runner to Blade Runner

 

Ten years ago when I landed in Kabul, it felt entirely familiar, even though I had not set foot in the place for 38 years and even though, in those years, a third or more of the city had been reduced to rubble by 25 years of war.

The Taliban had just fallen then, and the country was said to be in chaos. Stepping out of the plane, however, I felt the dry heat, smelled the silt in the breeze coming off the mountains, and it was just as if I had never left. Apparently that aroma had been lodged in my subconscious as a sense memory without my even knowing, until the same scent hit my nostrils again.

I stepped out of the terminal and oh my God, even though the wreck of a helicopter was lying sideways over there, just off the runway, I recognized the scenery at once. The mountains were exactly familiar. And soon after that, driving through the city in a taxi cab, I saw the same downtown. I saw the pharmacy my father had founded 50 years earlier, still operating, and still called the Ansary Pharmacy, though my family had nothing to do with it anymore; and because of that pharmacy, the intersection was called Ansary Intersection.

The road that ran past the old Russian bakery known as Silo used to mark the western edge of the city; and it still did. My uncle’s house was located just off that road; and it was still there. I recognized the same old battered door I had seen just before I left. It all came back to me.

And in the days that followed, I learned that even though the society had been traumatized by the horrors visited upon it, first by the Soviets, then by the civil wars, and then by the Taliban, the same old Afghanistan went on breathing below the trauma. The honey-slow passage of time, the deep sociability, the absence of any such thing as a deadline—it was all still there.

But that was then.

Now, in 2012, when I landed in Kabul again, I recognized nothing. Nothing! Coming out of the terminal, what hit my nostrils was oil and automobile exhaust fumes, not desert pollen in the breeze coming off the mountains. As for those mountains, who could tell what they looked like? Military barricades, barbed wire, and banks of solar panels blocked them from my sight. The terminal was new even though it looked as clangorously old as the previous terminal. I had a long walk ahead to get to Parking Lot C, where the people who were meeting me had to wait: for security reasons they could come no closer.

Officially, I was in Afghanistan to help with the Bare Roots Project, an initiative managed by Asma Eschan, one of those thousands of Afghans of my generation who left the country as a child and grew up in exile in the United States. (Unofficially, I had other agendas, but that’s another story.) The Project buys bare-root saplings from a nursery just outside Kabul and gives them to villages on the outskirts of town or just beyond. The deal is, if the trees are growing when the Project comes back the following year, the village gets that many more trees again.

In 2002, Kabul had been a city of 350,000 with only two traffic lights. In 2012 it was a city of more than five million—but still seemed to have only two traffic lights. Driving in Kabul was a life-or-death cross between stock car racing and demolition derby.

The rubble of bombed-out buildings was gone. The rubble of new construction had taken its place. Half-finished buildings of concrete and rebar rose from streets of mud. Metal freight containers jerry-rigged into housing jammed every conceivable crack of space between gaudy new mansions that looked like extravagant wedding cakes, bristling with chrome colonnades, glass towers, gilt molding, and tiles of many colors. The mansions were surrounded by stout walls topped with barbed wire.

Kabul has numerous TV channels now, with many locally produced shows. One, for example, re-enacted sensational high-profile kidnappings of recent days. The episode I watched chronicled the abduction (and rescue) of a cabinet minister’s son. In ten short years, Kabul had gone from Kite Runner to Blade Runner. (Credit for this phrase goes to Susan Hoffman.)

Beyond Kabul

In my day, the outskirts of Kabul lay two miles from the heart of Kabul. Now we drove for hours and the city never died away. Everyone we saw had a cell phone and was busy conducting urgent business with God knows whom. Once I saw a guy talking on a cell phone put that caller on hold and pull a second phone from his pocket to take another call.

We took trees to places that looked like the villages of my youth: warrens of compounds made of sun-baked mud bricks scattered over bare hillsides, with occasional streams gushing down from snow-capped peaks. All very familiar to me, except that solar panels and satellite dishes glinted from those cob rooftops.

Amidst all this, it amazed me to discover that the old Afghanistan, that feudal universe of peasant and pastoral nomads, was still in place. People are still riding donkeys, milking cows, and sowing seeds by hand. But onto this 12th century world has descended the expansionist 21st century world of tomorrow, all the gibber and scream of technology and money.

One midnight, during my visit, I got a text message on my cell phone: I should be ready. At 6 a.m. the next day. Someone was taking our group somewhere. The destination was not specified but it had been whispered to me secretly a few days earlier–secretly lest the wrong people hear that we would be on that road and waylay us. We were going to Bamiyan in Central Afghanistan, where the Taliban had destroyed those gigantic Buddhas eleven years ago.

Valley of the Buddhas

The 120-mile journey took eight hours, and we did not experience one drop of danger along the way. Evidently, the road was insecure only for travelers worth kidnapping. Everybody else was just living their life, and their life was all about cows, fields, and flocks–except that even in a craggy valley with no human habitation in sight, I saw power lines bringing electricity from Uzbekistan to Kabul. Even there, believe it or not, I had excellent cell phone service. I could have called my wife in San Francisco, which I can’t do from Virginia Street, two blocks from my house! We went past a guy on a donkey out there, and I saw a laptop peeking out of his saddlebag.

Bamiyan had the typical small-town Afghan bazaar, two rows of room-sized stalls flanking a street with cobble-stone sidewalks. We saw fresh meat hanging off hooks, whole haunches of lamb. We saw fruits and vegetables piled high in baskets woven by local women. It looked just like the small-town bazaars I remembered from my childhood, even down to the individual vendors dotting the sidewalk, guys on stools vending knickknacks and personal grooming services to pedestrians passing by. In my day, some of those would have been dalaks, barber-dentists.

But when I approached one of these guys, I saw that he wasn’t cutting hair. He had a briefcase-sized solar panel on a stand set next to his stool. The solar panel was attached to a 12-volt car battery. The battery was powering a laptop on his folding table. The laptop had a wireless connection to the Internet. For a small fee, this fellow was downloading songs from websites in Kabul and Peshawar and loading them into people’s cell phones.

This was not my father’s Afghanistan. Or even mine. We drove on to the guest house and one of the guests we found there wasn’t local. He spoke English. His name was Willie. He was from San Francisco — like me.

“What are you doing here?” I asked. He told me he had come to see the Buddhas, and when I told him the Buddhas were gone, he nodded and said he knew. He just wanted to see the place where the Buddhas used to be.

I knew exactly what he was talking about. I was there for much the same reason, it turned out — to see the place where Afghanistan used to be.

Occupy Deconstructed

 

 

(Don’t) Occupy Oakland

 

I keep wanting to say a few words about the Occupy movement, but everything I want to say has been said by someone at this point. Yet the urge to say a few words about Occupy doesn’t abate, so I’ll go ahead and say this much:

I was all in favor of Occupy when it started. Didn’t know exactly what it was going for, but I liked the simplicity of its formulation:

Ninety-nine Percent
One Percent.

Yup, I thought, that nails it. That’s the root of our socio-political ills today. And I liked the populist feel of the movement. Like most Americans, I have long been craving a broad-based movement to reclaim this country and its ideals, some movement that the overwhelming majority of us could agree on.

The Tea-Party-thing had the populism, but sadly its analysis was 99% dumb and 1% smart. What’s worse, the one percent of its analysis that was smart was wrong.

The dumb part included all the culture-war stuff as remedies. “Everything will be okay if we can just stop gay people from getting married.” That stuff. Or: “Everything will be okay if we can just get the government to stop women from having abortions.” Or: “Everything will be okay if we can just turn education over to the Church.”

And the dumb stuff included the old chestnuts. “Everything will be okay if we can just pry the fingers of government off our Medicare.” Or: “Everything will be okay if we can just cut taxes enough to shut down the government.”

And then there was the debt hysteria. “Too much debt in this country, we all have to cut back, tighten our belts, the government too.” Okay, too-much-debt is a problem. That’s the smart part of the Tea Party analysis. The disparity between the government’s revenues and the government’s expenditures, and the accumulation of government debt offset by government borrowing, is trouble in the making.

Someone should do something.

But who should do it and what should they do? That’s where the Tea Party’s analysis veered away from smart into wrong, with a whole lot of dumb sprinkled in. The Tea Party’s idea was (and is): If we all stop spending money, we’ll all have more money.

There must be a name for the logical fallacy involved in this conclusion, something like Universalizing the Particular. What’s true is that if everyone goes on spending as they have been, then any one person in the system who cuts back on his expenditures will grow richer. When water is flowing, you put a dam somewhere and you’ll create a pool behind it. But the water has to be flowing.

It’s erroneous to conclude from this particular that if everyone stops spending, everyone will grow richer. Yet that’s the core of the Tea Party argument, the part of its analysis that doesn’t offer regressive social ideology as the answer to our ills.
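The fallacy the essay is pointing at (a version of what logicians call the fallacy of composition, and what economists call the paradox of thrift) can be sketched in a two-person toy economy of my own devising, in which each person’s income is simply what the other person spends:

```python
# Toy illustration (hypothetical numbers, not from the essay): one
# person's spending is another person's income, so "everyone cut back"
# is not the same move as "one person cuts back."

def year_of_trade(spend_a, spend_b):
    """Each person's income is simply what the other spends."""
    income_a, income_b = spend_b, spend_a
    return income_a, income_b

# Case 1: only A tightens the belt while B spends as before.
income_a, income_b = year_of_trade(spend_a=50, spend_b=100)
savings_a = income_a - 50   # A earns 100, spends 50, pockets 50
print("A alone cuts back: A saves", savings_a)

# Case 2: everyone tightens the belt at once.
income_a, income_b = year_of_trade(spend_a=50, spend_b=50)
savings_a = income_a - 50   # A's income fell to 50 too: nothing saved
print("Everyone cuts back: A saves", savings_a)
```

This is the dam metaphor in the essay: the lone saver pools up money only because everyone else keeps the water flowing. When everyone builds a dam at once, the flow itself stops.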

The Occupy movement is based on an entirely different analysis, encapsulated by that simple ratio it calls to our attention: 99 percent/1 percent. That’s brilliant: just four words to nail the fly in our current ointment. So elegant, so concise, so precise. So true.

I have to admit that when I first saw the phrase, I took it as hyperbolic. Admired the intent but figured the truth was more like 80/20 or so. But recently I saw an article in Newsweek by Niall Ferguson, illustrated with charts and diagrams, one of which showed that half the annual income in America now goes to 1% of the people. The other 99% of us share the rest. (And even there the disparity between top and bottom is dramatic, since 75% of the total income goes to the upper 10%.) I had no idea.

What’s more, those top earners aren’t making their money doing jobs that pay really, really, really, really, really, really, really, really, really, really, really good wages. They earn their money mainly from investments which, as Romney’s tax returns recently dramatized, are taxed at about a third of the rate applied to the sort of income most of us earn, even those of us who are lawyers, doctors, and movie stars.

The salient point is not how many people are getting half the income. It’s the fact that half the income earned in this country–the upper half–is hardly taxed. The other half must be making up the difference. Taxes may be burdensome, but taking less from the undertaxed top can only add to the problems of the overtaxed bottom.

Which brings me back to Occupy. The genius of the movement at the start was that it encapsulated this whole complicated tangle in just four unmistakable words, clearing away all distractions to expose the single most important point.

But what has the Occupy movement been doing? In my area, at least–in San Francisco, and even more especially in Oakland–the supposed Occupy activists have been setting up permanent tent cities in public parks, making them unavailable as parks for the general public and turning them into unsavory places for average-income families to bring their kids. These “activists” have trashed small shops in downtown Oakland, breaking windows and driving away shoppers who previously patronized those stores. They’ve shut down the port for periods of time, costing port workers substantial income. They’ve mounted demonstrations that drew police from other neighborhoods, causing crime to spike in those other areas. They’ve argued that they were demonstrating against police tactics, shifting the conversation to “Who started this?” in a quarrel between demonstrators and police, both of whom belong to the 99%. A few weeks ago, Occupy activists swarmed into Oakland City Hall and damaged children’s art works on display there.

What does this have to do with that astonishing ratio, the 99 percent and the 1 percent? Do these activists think the one percent hang out in city parks and are suffering now because Occupy Oakland activists have cut off their access to these parks? Do they think small shop owners aren’t part of the ninety-nine percent? Do they picture the leading one-percenters huddling in fear and whispering, “Oh my God, we’ve got to start sharing the wealth before those guys in Oakland trash another children’s art show”?

The only productive actions are those that keep the public focused on the ratio originally summarized by those four words. Ninety-nine percent. One percent. Only thus will we start steering toward political solutions that address that one bottom-line fact (from which so much else flows).

Creating divisions within the 99%, arousing hostilities that pit people from one level of the 99% against people from another level of the 99% sabotages the very thing that made Occupy such a powerful idea. I can’t imagine any course of action better suited to the interests of the one percent than sowing clamorous divisions within the 99 %. If I were a Conspiracy Theorist, I would be looking for evidence now that the Occupy Oakland activists are minions of the one percent. I would be asking how much they were paid to do these things they’re doing, and I’d be trying to discover where the secret payments were deposited.

But I am not raising these points, because I don’t believe in Conspiracy Theory. What I do believe in is stupidity and venality. These, unfortunately, are the secret forces that undermine the best laid plans of the noblest idealists. These are the only charges I’m bringing against Occupy Oakland: stupidity and venality.