Trick or Treat?

Halloween postcard, ca. 1900-1910.
Kevin Kenny

Halloween is a Celtic festival, imported to America, and later re-exported to Europe, pumpkins and all.

The word Halloween is a contraction of All Hallows Evening—the eve of All Hallows Day (or All Saints Day). October 31 tends to be a boisterous occasion, whether in Boston (even without the World Series) or in the U.S. Southwest and Mexico, where it kicks off the festival of el Día de los Muertos.

In the Christian calendar, All Hallows Day, on November 1, was the day to remember saints and martyrs. All Souls Day, on November 2, was dedicated to all the departed faithful awaiting entry into heaven and hence in need of prayer.

As with most Christian holidays, the Church carefully overlaid the “days of the dead” on top of an earlier pre-Christian festival. Just as Christmas marks the winter solstice and Easter the onset of spring, Halloween was timed to coincide with a Celtic festival celebrating the end of the harvest.

The Protestant English overlaid a different holiday on the old harvest festival. They celebrated, and continue to celebrate, Guy Fawkes Night, the anniversary of a foiled attempt to blow up Parliament on November 5, 1605. This holiday caught on in colonial New England, where Halloween was not widely celebrated until the onset of mass Catholic immigration in the mid-19th century.

Most Catholic immigrants at this time were German or Irish. Germany does not have a long Halloween tradition; the holiday celebrated (and sometimes lamented) there today is an American import. Ireland, by contrast, has a robust and deep-rooted tradition. The old Celtic harvest festival, known as Samhain (after the Gaelic word for November), was a wild affair in Ireland.

On an Irish Halloween in the 19th century, the children stayed safely at home. It was the adults who went out in disguise, parading from door to door. There was no end of frolicking, cultural play, and social inversion—the world turned upside down—but it had a distinctly ominous undertone.

It was a short step from the rituals of Halloween to the choreographed violence of rural secret societies, among them the Whiteboys, the Lady Clares, and the Molly Maguires, all of them men disguised in women’s clothing.

Children today rarely feel a need to enact a “trick,” having grabbed a “treat.” In the past, however, the trick might have been a threat, a warning, a beating, or an arson attack. Perhaps even an assassination. The treat forestalling the trick was a concession by a landlord, his agent, or a hated neighbor. Half-yearly rents were due on October 31, adding to the tension. Over time, on both sides of the Atlantic, Halloween became a children’s celebration. It’s probably just as well.


Kevin Kenny is Professor of History at Boston College. He is the author of Diaspora: A Very Short Introduction (Oxford University Press, 2013).


Happy Halloween

Miniature of Jason Voorhees from Friday the 13th at Cockington Green.
Steven Cromack

Halloween (1978) and Friday the 13th (1980)—both more than thirty years old—are now America’s classic horror movies. Why do they enjoy such prominence in American culture? The answer rests in the content of their plots and the context in which they were produced. 

John Carpenter’s Halloween catapulted the slasher film into American culture with its release in 1978. In 1974, before he made A Christmas Story, director Bob Clark made Black Christmas, a story of young adults alone in a secluded area, ready to be terrorized, an all but standard plot line in horror movies today. But Halloween’s central character Michael Myers was something new. He was not just a serial killer, but one who stalked his victims with creepy music in the background. And he was seemingly indestructible. Never before had such a character existed in cinema. Not too long after Halloween came Friday the 13th, another movie about an indestructible evil being that slaughters all in its path.

On a deeper level, these films perhaps resonated with the psyches of Americans who lived through the 1970s, America’s “long national nightmare” and its “crisis of confidence.” It was an era of uncertainty: a time of stagflation, the transformation of social institutions and mores, and the Vietnam War. Americans learned from the Pentagon Papers that their government had lied to them. The cinema of the 1970s reflected this cynicism and despair. It is not a coincidence that this was the era of The Poseidon Adventure (1972), The Towering Inferno (1974), and Jaws (1975). The disaster films of the 1970s offered viewers the thrill and spectacle of normal life upended along with the relief of escape and survival. America’s mood during the 1970s guaranteed the popularity of Halloween. Following in the footsteps of Michael Myers came Jason and then Freddy Krueger. The list of slasher films since Halloween is extensive, and these movies have become a regular part of America’s Halloween culture.




Newport Stories: To Preserve or Not to Preserve, or, On the Million-Dollar Question about Newport’s (and All) Historic Homes

[Here is the fifth and final installment of a series of posts by Benjamin Railton that originally appeared on his blog AmericanStudies.]

Like so many evocative American places, the Newport, Rhode Island mansion The Breakers contains and connects to numerous histories, stories, and themes worth sharing. So in this series, I’ll highlight and analyze five such topics. As always, your thoughts will be very welcome too!

The Breakers.
I was pleasantly surprised by the quality, depth, and breadth of the self-guided audio tour at The Breakers—that tour, to be clear, provided starting points for all five of this week’s blog topics—but was particularly taken aback, in a good way, by a provocative question raised right at the tour’s outset. The narrator asks directly whether preserving mansions like The Breakers is a worthwhile pursuit for an organization such as the Preservation Society of Newport County—whether such mansions are architecturally or artistically worth preserving, whether they are historically or culturally worth remembering, whether, in short, these kinds of homes merit the obvious expense and effort that are required to keep them open and accessible to visitors. The tour presents arguments on both sides of the question, and leaves it up to the listener to decide as he or she continues with his or her visit.

Of course my first instinct, as an AmericanStudier, as a public scholar, as a person deeply interested in the past, was to respond that of course we should preserve such historic sites. But if we take a step back and consider what the question would mean in a contemporary context, things get a bit more complicated. Can we imagine a future organization spending millions of dollars to preserve Donald Trump’s many homes? Oprah Winfrey’s Lake Como getaway? Bill Gates’s estate? Certainly I can imagine tourists a hundred years hence being interested in visiting those places—well, hopefully not the Donald’s homes; but yeah, probably them too—but is that a sufficient argument for them to be preserved? Or does there indeed have to be something architecturally or artistically significant, or something historically or culturally resonant (beyond their owners’ obvious prominence), to merit the preservation of a private home? And do these “white elephants” (as Henry James famously called them) make the cut?

The question thus isn’t quite as simple as I had first imagined. But my own answer would, I believe, be to point precisely to the topics covered in this week’s blog posts. A site like The Breakers is the repository of so many compelling and vital American histories and stories, so many moments and identities that can help us understand and analyze who we’ve been and who we are. Of course there would be ways to remember and tell those stories without preserving the house, but I do believe that historic sites provide a particularly effective grounding for them, a starting point from which visitors (like this AmericanStudier) can continue their investigations into those themes. I know that my own ideas about America were expanded and amplified by my visit to Newport and The Breakers, as they have been by all my AmericanStudies trips. So while I know it’s not entirely practical, I vote for preserving anything and everything that can help with such ongoing and inspiring AmericanStudying.

Ben
P.S. What do you think?


Newport Stories: Alice and Alva Vanderbilt


[Here is the fourth installment of a series of posts by Benjamin Railton that originally appeared on his blog AmericanStudies.]

Like so many evocative American places, the Newport, Rhode Island mansion The Breakers contains and connects to numerous histories, stories, and themes worth sharing. So in this series, I’ll highlight and analyze five such topics. As always, your thoughts will be very welcome too!

Alva Vanderbilt, 1883.
At the same time that Cornelius and Alice Vanderbilt were building The Breakers, Cornelius’s brother William and his wife Alva were completing their own Newport mansion, Marble House. Located just down the street from each other, these two Vanderbilt homes jointly exemplified and dominated late 19th-century Newport society, and it’s easy to see the two women as similarly parallel. Yet the two marriages ended in very different ways—Cornelius died suddenly in 1899, at the age of 56, and the widowed Alice lived 34 more years but never remarried; Alva controversially divorced William in 1895 and married the younger Oliver H.P. Belmont, moving down the street into his home Belcourt Castle—and those events foreshadowed the two women’s increasingly divergent trajectories.

Both Alice and Alva would continue to play significant roles in Newport and New York society for their more than three remaining decades of life, but in dramatically different ways. Alice, known as the dowager Mrs. Vanderbilt, made her New York and Newport homes the social centers for which purpose they had been built, donated philanthropically to numerous causes (including endowing a building at Yale and one at Newport Hospital), and generally maintained her traditional, influential, powerful high society status. Alva, on the other hand, forged more pioneering and modern paths: her passion for architecture led her to become one of the first female members of the American Institute of Architects; her dissatisfaction with the highly traditional New York Academy of Music led her to co-found the Metropolitan Opera; and, most tellingly, she became one of the most active and ardent supporters of women’s suffrage, forming the Political Equality League, establishing the National Woman’s Party, and working with Anna Shaw, Alice Paul, and other luminaries to help ensure the passage of the 19th Amendment.

From an early 21st-century perspective, Alva’s path seems the far more influential, impressive, and inspiring one; whatever we think of her architectural and musical endeavors (and they were certainly important), few 20th-century American achievements were more significant and lasting than women’s suffrage, and Alva’s efforts played a meaningful role in helping effect that change. But I think it would be a mistake to discount all that Alice did and accomplished in those thirty-plus years after her husband’s unexpected death, and the legacy that her efforts likewise left behind. Indeed, Alice’s independent and influential life offers an implicit but compelling argument for women’s social and political equality and for how much every American has to offer his or her society and era. Without the presences and contributions of both of these women, far more than just Newport society would have been impoverished.

Final Breakers story tomorrow,

Ben
P.S. What do you think?


Newport Stories: On the Vanderbilt Heiress Whose Seemingly Stereotypical Life Belies a Far More Individual Identity


[Here is the third installment of a series of posts by Benjamin Railton that originally appeared on his blog AmericanStudies.]

Like so many evocative American places, the Newport, Rhode Island mansion The Breakers contains and connects to numerous histories, stories, and themes worth sharing. So in this series, I’ll highlight and analyze five such topics. As always, your thoughts will be very welcome too!

Gertrude Vanderbilt Whitney, 1916, by Robert Henri. 
Just in case Gertrude Vanderbilt (1875-1942), eldest surviving daughter of Cornelius Vanderbilt II and his wife Alice Gwynne Vanderbilt, didn’t seem to have enough of an elite American legacy on which to live, she went ahead and married Harry Payne Whitney (1872-1930), son of a famous attorney, grandson of a Standard Oil executive, and heir to a sizeable fortune in his own right. Together the two expanded upon those impressive starting points, inhabiting a New York mansion of their own, becoming prominent racehorse breeders, world travelers, and art patrons, and, in a Gospel of Wealth moment for Depression-era America, endowing the New York Whitney Museum of American Art just before Harry’s death in 1930.

If we see that latter act as simply the kind of thing super-rich people do with their money toward the ends of their lives, however, we miss a far more intimate and lifelong factor. Gertrude apparently had a strong affinity and passion for the arts from a young age, but the Vanderbilts’ society in New York and Newport did not seem to present her with opportunities to act upon those perspectives. After her 1896 marriage, both because of the greater degree of independence it afforded her and because (it seems) Harry supported her efforts, she finally found such opportunities: organizing and promoting women artists, individually and in exhibitions; and studying art and sculpture in her own right. She went on to achieve a career as a public sculptor, creating for example a fountain in the famous patio of the Pan American building in Washington, DC. One of her works is even housed in the Whitney Museum—not because of her last name, but because it merits inclusion in such a space.

Rebecca Harding Davis’s novella Life in the Iron-Mills (1861) focuses on the tragic life and death of Hugh Wolfe, a factory worker whose talent for sculpture goes unappreciated and unrewarded in his grimly realistic environment. While Hugh is a fictional character, the point is real and important: that whatever limitations Gertrude Vanderbilt faced on her way to a successful artistic career, her family and status also certainly provided possibilities that the Hugh Wolfes of the world are far less likely to find. But on the other hand, Gertrude also represented a new, modern American woman—one who not only pursued and achieved her own artistic career, but who at the same time supported the careers and art of her peers and her nation. That her money helped her to do so is an unquestionable truth for which all who visit the Whitney should give thanks.

Next Breakers story tomorrow,

Ben
P.S. What do you think?


Newport Stories: The Omelet King, or, On the Very American Story--in Some of the Best and Worst Senses--of Rudy Stanish

[Here is the second installment of a series of posts by Benjamin Railton that originally appeared on his blog AmericanStudies.]

Like so many evocative American places, the Newport, Rhode Island mansion The Breakers contains and connects to numerous histories, stories, and themes worth sharing. So in this series, I’ll highlight and analyze five such topics. As always, your thoughts will be very welcome too!

While I hope that yesterday’s post complicated some of the simplest narratives about a figure like Cornelius Vanderbilt II, it was nonetheless, I admit, still pretty crazy to use the phrase “rags to riches” to describe Commodore Vanderbilt’s grandson. But how about Rudolph “Rudy” Stanish, who began life as the seventh of thirteen children born to an Eastern European (Croatian and Serbian) immigrant couple in Yukon, Pennsylvania, and ended his life as the world famous Omelet King, chef to some of America’s most prominent people and families? A young man who was brought to Newport’s mansions before he was 16 (in 1929) to work as a kitchen boy with his godmother, and who through a combination of talent, hard work, luck, timing, and more found himself making John F. Kennedy’s inaugural breakfast? Yup, I’d say that just about defines rags to riches.

The principal question when it comes to such rags to riches stories has never been whether they’re possible at all, however, but whether they’re representative of something larger than their singular existence—whether, that is, they offer any sort of more general blueprint for success. As part of the audio tour of The Breakers (which is not where Stanish began his career but where he received his first big break, filling in as head chef at the last minute for a dinner party and impressing the hosts sufficiently to stay on) Stanish is quoted as saying precisely that his story was indeed exemplary; that in a world like that of The Breakers, those among the servants (“The Staff,” as the Vanderbilts insisted on calling them) who worked hard and gave it their best and, yes, were gifted at their jobs could make their way to something far beyond the cramped and hot upstairs quarters where they lived at The Breakers. And it’s hard to disagree: without at least the possibility of such mobility, more than just Stanish’s own story would lose a good bit of its appeal—the story of America would as well.

But even if we accept that Stanish’s story is not only individually possible (which of course it was) but communally achievable, there remains at least one other significant criticism that can be levied against such stories: they are not so much narratives of meritocracy, of opportunity, or, even more radically, of challenges to the existing hierarchies of wealth and class as they are reflections of our society’s unquestionable emphasis on and celebration of fame. That is, Stanish became famous for how well he served the nation’s powerful elites. But even if that fame granted him a place among those elites, it neither equated his identity with theirs nor (especially) led to any questions about the world in which they all operated. To be clear, that’s not the role of any individual, and I’m not critiquing Stanish in any way. But if his story is a uniquely American one, it is at least in part because it highlights the often superficial nature of success in our society.

Next Breakers story on Monday,

Ben
P.S. What do you think?


Newport Stories: Cornelius Vanderbilt II, or, On Whether a Child of Privilege Can Also Be a Horatio Alger Story

[Here we present the first installment of a series of posts by Benjamin Railton that originally appeared on his blog AmericanStudies.]

Like so many evocative American places, the Newport, Rhode Island mansion The Breakers contains and connects to numerous histories, stories, and themes worth sharing. So in this series, I’ll highlight and analyze five such topics. As always, your thoughts will be very welcome too!


Cornelius Vanderbilt II (1843-1899), the man for whom The Breakers was built (as perhaps the most luxurious “summer cottage” in human history), was named after his grandfather, Commodore Cornelius Vanderbilt (1794-1877), who at his death was the wealthiest man in the United States. Which is to say, young Cornelius wasn’t just born into privilege; he was perhaps the closest thing to the royal baby American society has produced. Moreover, over the thirty-four years between his birth and his grandfather’s death, a period that culminated quite tellingly with the start of the Gilded Age, the family’s fortune only increased further. None of that is young Cornelius’s fault, and if he had decided to give the fortune away he’d have been about the first person ever to do so—but it does make it hard to see him as anything other than the scion of an American dynasty.

Yet as illustrated at length by Cornelius’s entry in Appleton’s Cyclopedia of American Biography (1900), the young man’s life did in some interesting ways mirror those of a Horatio Alger, rags to riches, self-made protagonist (without, of course, details like being orphaned or living on the streets). Beginning at the age of 16, Cornelius spent the next five years working as a clerk in two small New York banks, learning the ins and outs of the financial world; he then did the same with the railroad industry in which his family had made their fortune, working for two years as assistant treasurer and then ten as treasurer of the New York and Harlem railroad company. Which is to say, when he became vice president of that railroad in 1877, at the age of 34, he did so after nearly thirteen years in the industry and nearly twenty in business overall; while it’d still be fair to say that he had been destined for the position and role from birth, it certainly would not be accurate to argue that it was in any blatant or nepotistic sense handed to him.

So what? you might ask. Do those years of work make the egregious excess, the truly conspicuous consumption of The Breakers less grating or more sympathetic? Do they in any way complicate Cornelius’ status as the poster boy for Gilded Age inequities? I don’t know that they do—but I do know that they remind us of the complexities, nuances, contradictions, the messy dynamic humanity, at the heart of most every American identity and life, story and history, individual and community. It’s entirely fine—and, I would argue, an important part of a public AmericanStudier’s job—to critique what we see as the worst actions or attributes of historical figures like the Vanderbilts. But it’s not at all okay to do so by oversimplifying or mythologizing (in positive or negative ways) lives and identities, by turning the past into the black and white caricatures that such myths demand. Cornelius Vanderbilt II was a scion of privilege who built one of the most garish mansions in American history; he also worked, and apparently worked hard and well, for forty of his fifty-six years of life. That’s all part of the story of The Breakers for sure.

Next Breakers story tomorrow,

Ben
P.S. What do you think?


Joseph Amato on Local History and the Decline of Rural America

Joe Amato, emeritus professor of history at Southwest Minnesota State University, is a prolific and creative scholar. He has published eye-opening books on unique topics (his most recent is Surfaces: A History). He is also a veteran practitioner of the kinds of history most avidly pursued by non-academics: genealogy and local history. The January, April, and June issues of the 2013 volume of the Historical Society's bulletin Historically Speaking feature a three-part essay series by Amato, "Place and American History," where he ruminates on these seemingly mundane--though in his hands anything but--historiographical genres. Recently Amato spoke to South Dakota Public Radio's Nathan Puhl about these essays. You can listen to their conversation here.


Unto the Sea Shall Thou Return: Boston, 2050

Chris Beneke

On my office wall, I have a replica 1775 Boston map. It looks like this:

Boston, 1775
As I stared idly at it last week, I was suddenly struck by how much the drawing resembled other maps that I'd seen recently, such as this one:

These Scary Maps Explain What Sea Level Rise Will Mean in Boston
Projected map of Boston after a 5 ft. sea level rise (coupled with an additional 2.5 ft. storm surge).
If you concentrate only on the grey areas above, you should discern (as I finally did) an eerie resemblance to the 1775 map at the top.

I wasn't the first to notice the similarity. The Atlantic ran a piece last February that I missed, and perhaps you did as well. There, Emily Badger noted the resemblance between Boston in the 1640s and the exceedingly damp, post-global warming projections of what it will look like in 2050, 2100, etc.

By dint of massive and repeated landfills over the last two centuries, Bostonians have doggedly claimed areas that once belonged to the sea. Before this century is over, the sea may be taking many of them back.


What Blogging, Twitter, and Texting Do for the Historian's Craft

Heather Cox Richardson

For all their new applications, new technologies are also good for the old-fashioned craft of history. They are excellent for honing our writing skills.

First of all, blogging and tweeting require very low investments of professional energy. There is something daunting about starting A Book. I often panic when I face a new project, because I simply can’t remember how to begin. What do you write first? How do you set everything up? Do you write an introduction? And on and on and on. For days. Everything seems Very Important.

Andrew Sullivan, an early and influential blogger.
Blogging, though, requires none of that. It is an exercise in brevity, centered on a single idea. It is not intended to Sit On A Shelf Forever. It doesn’t have to be Brilliant. It has to get done, and done quickly. So stepping over the threshold is easy. It’s fun. It’s a good way to rev up your engines to carry into the day’s more daunting projects.

Blogging also forces your writing up several notches. It has to convey an idea clearly and, with luck, engagingly. Those are not necessarily skills professional historians practice very much. We tend to fight over arcane theories and dig so deeply into our research that we lose all but a few other specialists. Blogging forces you to distill complicated ideas into crucial points, and then to communicate those points in such a way that a nonspecialist can understand. (Twitter and texting have similar value. Never is the importance of strong verbs more clear than when tweeting. You MUST use short, powerful verbs to keep ideas within 140 characters. Don’t believe me? Follow @JoyceCarolOates.)

Blogging and tweeting also let a writer develop a personal style that is terribly hard to find in academic writing. The form is much more epistolary than academic argument, and that very informality means that many more of your own quirks come out, which can bring your online prose to life. Blogging lets you develop a sense of humor in your writing. Hell, it encourages you to.

And that is maybe the key to why blogging and tweeting are good for the historian's craft. They’re fun. They let us love what we do on a daily basis. We can play with words and new ideas without committing to them for months or years. We get to share our enthusiasms with a different audience, and learn new perspectives and get new ideas to keep our work fresh. There is a whole wide world of terrific historians out there, both in and outside of the academy, blogging about American menus, and things that show up in historical romances, and American cultural icons, and art history, and so on across an incredible range of topics. Getting to spend time in their company is a gift.


A Tense October

Heather Cox Richardson

United States Department of Defense graphic in the John F. Kennedy Presidential Library and Museum, Boston.

The Cuban Missile Crisis lasted for thirteen tense days in October 1962. During those days, the USSR faced off against the U.S. over the placement of Soviet missiles in Cuba. President Kennedy vowed not to permit missiles so close to America; Premier Khrushchev vowed to put them in Cuba anyway. The two-week standoff was the closest the Cold War ever came to erupting into a hot war.

And with nuclear weapons widespread, it would have been a hot war, indeed.

The J.F.K. Library has put together a website that enables a viewer to experience each day of that crisis.




When History Hits Home

Elizabeth Lewis Pardoe


In my job as a fellowships advisor I stress to applicants that a strong application demonstrates three things: depth of expertise in a given field; breadth of interests and experiences; and the applicant’s change over time, i.e. history. The third pillar of the triad constitutes the all-important biographical element. Americans' hunger for personal history makes Ancestry.com’s stockholders rich and guarantees the flood of genealogists to the shores of Salt Lake.

The Ball family of Lompoc, California, ca. 1894. 
My sons started high school and middle school at the end of August. As I delivered my nearly-men to the doors of their new institutions, my desire to make history from memory overwhelmed me. The hundreds of digital images stored on the family computer cried out for a chronology with which to capture the evolution of my chubby-cheeked chappies into the skinny tweens who seek escape from their mother’s needy embrace.

My compulsion has since produced three photobooks, and I am at work on two more. My mother used to assemble such tomes from the fragile prints of an earlier age. My photobooks exist in a cyber “cloud” as well as on my coffee table. My ability to shape the past into the form I like resembles the easier editing now accessible to authors of all history books. While David Hume wrote by hand and Oscar Handlin clattered at a keyboard, postmodern respondents to Clio’s call can, in a matter of seconds, place images on pages and produce flourishes that only the most able medieval monks could make over many days.

I love it.

The same woman who sat resentfully in archives and waited for genealogists to get off the microfilm machine so she could do her “real” research now neglects 18th-century documents for 21st-century snapshots. This history is my most real. I lived it. I interpret it for my own progeny. My intimate ties to my family finally managed to outstrip my intellectual engagement with the past.

Change over time dominates a narrative in which few words appear. Twenty pages of pictures are equivalent to a tome like Albion’s Seed. The calculus premised on one picture bearing the same value as one thousand words integrates images into narratives with frightening speed. The Halloween and Christmas albums bring back my sons’ obsessions with Thomas the Tank Engine and Star Wars that seemed interminable in the moment but flashed by faster than Lightning McQueen in retrospect. The backdrops document home improvements and gardening experiments since forgotten. Relationships morph from page to page as erstwhile trick-or-treaters move away, children grow, and grown-ups go gray.

I relish the lifeways of past peoples when a fine historian like David Hackett Fischer resurrects them for his readers. I revel in remembering the lifeways of my own little family in myriad private moments and wonder if any historian not yet born could relay them to strangers on our behalf. All the elements exist: the depth of our affection, the breadth of our interests, and their evolution over time. One day my sons might make lovely fellowship applications from the raw materials of their shared past. Could anyone else? Would I want them to try?

The final question frightens this historian, who puts the personal tragedies of the long dead into public view. When I assemble an album, have I issued an invitation or proven the impossibility of my profession?


Soccer: An American Sport

Brian D. Bunk

Chicago, 1905. SDN-004085, Chicago Daily News negatives collection, Chicago History Museum.
One sure way to irritate historians of soccer in the United States (and yes, there are a few) is to call the sport foreign. In various forms the game has been played by Native Americans and Puritans; factory workers and college students; and professionals and preschoolers throughout American history. Why then does this idea persist? The reasons are complex, but one important factor is that the mainstream sporting press, especially in the second half of the 20th century, has continually depicted soccer as a foreign game.

Anyone who watches ESPN, the nation’s dominant sports network, has probably already realized what the website Deadspin.com [http://deadspin.com/what-i-learned-from-a-year-of-watching-sportscenter-5979510] quantified for 2012: the majority of airtime on the channel’s signature news program SportsCenter focused on just three leagues, the National Football League, the National Basketball Association, and Major League Baseball. Soccer, meanwhile, earned just 1.3% of the total, and most of that minuscule amount was devoted to international soccer. Of course ESPN has only been on the air since 1979 and for many years was not nearly as influential as it is today.

Instead, fans relied on printed media for sports news, and one of the leading periodicals over the last six decades has been Sports Illustrated. I spent some time examining how many times, and in what context, the editors put soccer on the cover. From the first appearance of the weekly in August 1954 through August 2013, soccer made the cover just sixteen times. It’s not just the absence of the sport that’s significant but also the context in which it has been portrayed. Fifteen of the sixteen soccer covers have featured foreign players, foreign locations, or U.S. national team players, both men and women. The individuals include Pelé (Brazil), Giorgio Chinaglia and Mario Balotelli (Italy), Daniel Passarella and Diego Maradona (Argentina), and David Beckham (England). Another cover showed Brazilian players celebrating their World Cup triumph in 1994, and in 2010 a cover story promised to explain “what soccer means to the world.” Each of the six times U.S. national team players made it to the front, they were featured in their roles as representatives of the country in international competitions. In a few cases individuals playing in major American professional leagues have graced the cover: Pelé and Chinaglia of the New York Cosmos of the North American Soccer League (NASL) in the 1970s and David Beckham of the L.A. Galaxy of Major League Soccer in 2007. Notice that although the leagues were based in the U.S., the players featured were born overseas. Only once in all the decades of SI has an American-born player, playing in an American league, appeared on the cover. Curiously, it was also the first soccer cover in the magazine’s history: goalkeeper Bob Rigby of the Philadelphia Atoms (NASL) on September 3, 1973. The history of soccer in the United States is much longer and richer than most people realize, and we shouldn’t allow coverage of the sport to mask its true social and cultural significance, either today or in the past.

Brian D. Bunk is a historian at the University of Massachusetts, Amherst and creator/host of the Soccer History USA Podcast.


Grand Banquet at Delmonico's, New York City, 1880

Henry Voigt


In the preface of his 1894 cookbook The Epicurean, Delmonico’s chef Charles Ranhofer cited seventeen grand banquets as being particularly memorable.1 One of these dinners had been held fourteen years earlier for Count Ferdinand de Lesseps, the French entrepreneur who built the Suez Canal. Eager to replicate his engineering feat, De Lesseps came to New York in March 1880 to raise money for a sea-level canal that would cut across the Isthmus of Panama. As was customary, a banquet was held in his honor. However, as far as the French-born chef and his brigade were concerned, their famed countryman was more than just another special guest. To them, he was a hero of the age. Observing a palpable excitement in the air during dinner, the reporter from the New York Times wryly noted that “the nationality of the distinguished guest of the evening had had something to do with the zeal of the cooks, confectioners, and waiters.”2

On New Year’s Day of that year, De Lesseps symbolically began work on the canal by digging the first shovel of sand and dumping it into a Champagne box. Transoceanic waterways were regarded as essential to expanding free trade, and this was reflected by De Lesseps’ family motto Aperire Terram Gentibus, meaning “To Open the World to all People.” This motto, which was adopted by the Compagnie de Suez, appeared on the menu, as well as the admission ticket shown above. Although the United States liked the idea of free trade in theory (and in practice when it favored the country), the prospect of a French canal in the Western Hemisphere caused consternation in Washington. Remembering that it had only been a decade since Napoleon III tried to make Austrian Archduke Ferdinand Maximilian the emperor of Mexico, President Rutherford B. Hayes was suspicious of this project, bluntly declaring that “a canal across any part of the Isthmus is either a canal under American control, or no canal.”

Despite the political posturing, the banquet for De Lesseps was fully subscribed by 250 wealthy and powerful men, including the industrialist Andrew Carnegie. Constructed in a manner befitting this august assembly, the bill of fare was engraved on a gilt-edged card that was then attached to a fringed swatch of satin with a silk ribbon.3 This expensive menu design, which was in vogue in New York for about five years during this period, was reserved for only the most important banquets. The blue menu shown below is in the NYPL collection.




The menus at the dinner appeared in various colors, as evidenced by this brown card tatted on maroon satin in my collection.



The lavish bill of fare includes some of the typical American foods that were de rigueur in the nineteenth century, such as oysters and Canvasback duck, which invariably appeared at the finest banquets. In addition to these customary fixtures, there are several particularly rich dishes that were seen much less often. One example is supremes de volaille Lucullus, a chicken entree named after Lucius Licinius Lucullus, a prominent epicurean of the late Roman Republic. For this entree, a buttered chicken breast was baked in paper and then garnished with truffles, “tongues balls” (comprising half-inch spheres of finely chopped beef tongue), and capon kidneys, mixed with a béarnaise sauce that was finished with meat glaze.



Another calorific dish is filet de bœuf Rossini. Named after the opera composer Gioachino Rossini, this was a filet of beef topped with hot foie gras and a slice of truffle, and served on buttered spheres of bread with a truffled Madeira sauce; it was the sauce that gave these dishes a dimension now called “big flavor.” This classic dish, known as tournedos Rossini in the twentieth century, became a standard at French restaurants in the United States during the 1950s, and stayed in fashion until the mid- to late-1970s, when sauces were lightened in accordance with the dictates of nouvelle cuisine.4

Following the usual protocol, dinner was served over the course of two and a half hours, beginning at 7:00 in the evening. At 9:30 PM, the six long tables were cleared of the large decorative sculptures of confectionery that had been skillfully crafted for this occasion, depicting Monsieur De Lesseps and the triumphal moments in his life. Once all the diners had an unobstructed view of the dais, except for the haze of wisteria-colored cigar smoke that typically hung in the air at these convivial gatherings, it was time for the speeches to begin.

Shortly after this banquet, De Lesseps went to Washington to meet with President Hayes and testify before Congress. His canal project proceeded, but eventually failed, devastated by epidemics of malaria and yellow fever and recurrent landslides. The bankruptcy sparked a scandal in France. De Lesseps was ordered to pay a fine and even came close to being thrown in jail. He died in 1894, a few months after Chef Ranhofer published his 1183-page cookbook, revealing how he prepared the grand banquets of the Gilded Age.5

This originally appeared on September 30, 2013 on Henry Voigt's blog The American Menu.

Notes
1. Charles Ranhofer, The Epicurean (New York, 1894). 
2. New York Times, March 2, 1880.
3. The menus were probably made by Borden & Cain, a downtown stationer at 46 West Broadway that coincidentally printed the daily à la carte menus for Delmonico's.
4. New York Times, February 28, 2012.
5. Fifteen years after the De Lesseps banquet, the bill of fare was reproduced again in the New York Times, appearing in a Sunday article titled "Fine Dinners in New York" on November 10, 1895.



Baseball’s Forgotten Experiment

Steven Cromack

Contrary to popular belief, Jackie Robinson was not the first black man to play major league baseball. That title belongs to Moses Fleetwood Walker, who lived and played more than sixty years before Robinson. Walker’s story is fascinating not only because of his baseball stardom, but also because an all-white jury acquitted him of first-degree murder in 1891.

Historians do not know much about his early years. Moses Fleetwood Walker was born in Ohio. A minister’s son, he entered Oberlin College and planned to become a lawyer. While at school, however, it became clear that his passion lay elsewhere. Instead of going to class, Walker played baseball, and in 1883 he landed a spot in the minor leagues as a catcher with the newly formed Toledo Blue Stockings. As a player, he impressed the press and the fans. Sporting Life, the nation’s largest sports publication at the time, wrote on September 15, 1883: “Walker, the colored catcher of the Toledos, is a favorite wherever he goes. He does brilliant work in a modest, unassuming way.” In 1884 Toledo joined the major competitor to the National League, the American Association. As a result, Walker earned his title as the first black player in the major leagues.

Unfortunately for Walker, tension with his teammates, unrelenting jeers from fans, and an injury led to his release. He bounced around the minors for a while, eventually ending his career in Syracuse. In 1891, upon leaving a bar in that city one afternoon, Walker encountered a group of white men, one of whom threw a rock at him while the rest surrounded him. In a panic, Walker stabbed the closest man. He was indicted for first-degree murder. Amazingly, the all-white jury found him not guilty.

Walker spent the rest of his life writing a treatise advocating the return of black Americans to Africa. He died of pneumonia in 1924 and until 1996 lay in an unmarked grave. America was not ready for Walker. C. Vann Woodward wrote in his Strange Career of Jim Crow that America in the 1880s was: "The twilight zone that lies between living memory and written history . . . . It was a time of experiment, testing, and uncertainty—quite different from the time of repression and rigid uniformity that was to come toward the end of the century. Alternatives were still open and real choices had to be made."

Instead of embracing integration, baseball’s managers drew the color line, and Walker was forgotten. It was not until the 1980s that historians uncovered Walker’s story. For further reading, see David Zang’s Fleet Walker’s Divided Heart: The Life of Baseball’s First Black Major Leaguer.


The Historical Roots of the Evangelical Adoption Boom

Arissa H. Oh

 International adoption is in the news almost daily, but its numbers are in decline and the tone of the conversation around it has darkened. Celebrity adoptions and heartwarming stories of orphans finding "forever families" in the U.S. have given way to more skeptical coverage that emphasizes the underside of international adoption: the profit motive that leads to illicit practices and the lack of regulation and oversight in a vast system that shuffles children around the world.

The American evangelical crusade for international adoption has received particularly sustained attention. Since the middle of the last decade, evangelical churches and organizations have encouraged their members to adopt children from abroad, often providing funds to help with the high costs. They have promoted a culture of adoption by publicizing a “global orphan crisis,” a disputed concept in itself. Children—many not actually orphans—are adopted hastily by well-intentioned but ill-prepared parents. Horror stories of fraud and abuse abound.

Much of this is not new. International adoption did not begin in the 1990s, or even in the 1980s. Americans began adopting children from abroad at the end of World War II, mainly from places with significant U.S. troop presences, like Germany and Japan. Systematic international adoption began in Korea after the Korean War (1950-1953) as a way to remove mixed-race “GI babies,” the children born of Korean women and foreign military personnel. It quickly grew to include non-mixed-race Korean children, and then spread to other developing countries: most notably Vietnam in the 1960s, Colombia in the 1970s, Guatemala and India in the 1980s, and China, Romania, and Russia in the 1990s.
   
Nor have evangelicals become focused on international adoption only in the last several years. A central reason why adoption from Korea became systematized was the intervention of American evangelicals. They publicized the plight of Korean children in need of rescue, lobbied Congress for the changes in immigration laws that made these adoptions possible, and promoted the practice among their friends, churches, and communities. American evangelicals considered international adoption a form of missionary work long before the megachurch leaders of the 21st century began to cultivate an adoption boom.

The current reporting on evangelical adoption overlooks the importance of race in international adoption. The vast majority of these adoptions involve white parents and non-white children. In the 1970s, as Americans faced a white baby “famine” at home, a hierarchy of desirability became firmly established in which non-white children from abroad were preferable to African-American children. These children were not white, but they weren’t black either. That they could be rescued from poverty and backwardness heightened their sentimental appeal. Today, Americans cross the color line more readily to adopt black children from African countries. Their blackness is mitigated by their perceived exoticism and victimhood.

Consumerism and humanitarianism have always been uncomfortably entangled in both domestic and international adoption. Regardless of their reasons for adopting, would-be adopters have had to venture into the marketplace in order to obtain a child. Americans long ago declared in principle that children cannot be assigned a market value, but adoptive parents, especially those adopting internationally, engage in what looks like consumerist behavior: selecting children priced according to race, sex, age, disabilities, and country of origin.

These market dynamics explain why the corruption that has captured the attention of many journalists today is not new. Baby-hunting, trafficking, and stealing; document forgery; unclear relinquishments; coercion of birth parents (usually mothers); substituting one child for another—all of these practices have shadowed international adoption since it began. The unintended consequences of international adoption also follow the practice wherever it goes, especially the vicious cycle of orphanage expansion and child abandonment, and foreign governments’ use of international adoption as a substitute for domestic child and social welfare.

Reforms to make international adoption more ethical are decades overdue. The process needs to be more stringent, uniform, and transparent. Americans also need to rethink its purpose. Should international adoption be about finding families for children, or children for families? This all-important question lies at the troubled heart of international adoption, and it demands an answer from all Americans, not just evangelicals.

Arissa Oh is an assistant professor in the history department at Boston College, where she teaches classes on race, gender, and immigration in 20th-century U.S. history. She is in the process of publishing her first book, Into the Arms of America: The Korean Origins of International Adoption.








In Small Things Forgotten Redux

Robin Fleming

Sometimes the most unimpressive objects, like this little Romano-British ceramic pot, found in Baldock, in Hertfordshire (in the UK), can speak volumes about the lives of long-forgotten individuals. To appreciate the value of this pot, which was manufactured in the 4th century but still in use in the 5th, we need a little context. Although Britain in 300 CE was as Roman as any province in the Empire, within a single generation of the year 400, urban life, industrial-scale manufacturing of basic goods, the money economy, and the state had collapsed. Because of these dislocations, ubiquitous, inexpensive, utterly common everyday objects––including mass-produced, wheel-thrown pots like this one––began to disappear.

The dislocations caused by the loss of such pottery were immense, and it is easy to imagine the ways the disappearance of cheap, readily available pots would have affected the running of kitchens, the rhythms of daily work and the eating of meals. But pots like our pot had also long been central in funerary rites, and the fact that they were no longer being made must have caused heartache and anxiety for bereaved families preparing for the burial of a loved one in the brave new world of post-Roman Britain.

Baldock, the site of our find––which, in the 4th century, had been a lively small town with a hardworking population of craftsmen and traders––ceased in the early 5th century to be a town, and in the decades after 400 it lost most of its population. Still, a few people continued to bury their dead in the former settlement’s old Roman cemetery.

During the Roman period, a number of typical Romano-British funerary rites had been practiced here, including postmortem decapitation (with the head of the dead person placed carefully between the feet of the corpse!) and hobnail-boot burial. Most of the dead during the Roman period were placed in the ground in nailed coffins, and a number were accompanied in their graves by domestic fowl and mass-produced, wheel-thrown pots, many of them color-coated beakers like our pot.

After 400, as pottery production faltered in the region, the community burying at Baldock carried on, as best it could, with time-honored Romano-British funerary traditions. Domestic fowl and coffins (although some now partially or wholly fastened with wooden dowels rather than now scarce iron nails) continued to play starring roles in funerals; and postmortem decapitations and hobnail-boot burial persisted, as did the placing of pots at the feet of the dead.

This is where our pot comes in. It is from one of Baldock’s 5th-century post-Roman graves. It is an extremely worn color-coated beaker, mass-produced in the years before the Roman economy’s collapse, and it had to have been at least a half-century old when buried. Much of its slip-coat had rubbed off from long years of use, and its rim and base were nicked and worn with age. Although this is exactly the same kind of little beaker favored by mourners burying at Baldock in the 4th century, the appearance of one in a 5th-century grave is startlingly different, because pots as hard-worn as this are never found in Roman-period graves. This pot is an extraordinary survival, an heirloom carefully husbanded by people determined to carry on funerary practices in which their families had participated for generations, rituals that, with the collapse of industrial-scale pottery production, must have required determination and the careful preservation of whatever pots were left. It gives eloquent testimony to the lived experience of people alive during Rome’s fall in Britain, people who were trying, as best they could, to maintain the rituals and lifeways of their ancestors.

Robin Fleming, professor of history at Boston College, is a 2013 MacArthur Fellow. Her most recent book is Britain after Rome: The Fall and Rise, 400 to 1070 (Penguin, 2010). 


The Great Chicago Fire, Part 2

Mimi Cowan

In yesterday’s post I gave you the basics of Chicago’s 1871 Great Conflagration, as they called it, and how Mrs. O’Leary became everyone’s favorite scapegoat. I also promised you a story about what French socialists, women with Molotov cocktails, Mrs. O’Leary, and the creation of modern Chicago all have in common.

So here’s where the story starts: as I flipped through a series of old images of Mrs. O’Leary, I realized that she looked different in every picture. That’s because Mrs. O’Leary hid from the press; she didn’t want anyone to sketch her likeness in the papers. As a result, illustrators were free to depict her in any way they chose. But if these aren’t accurate representations of Mrs. O’Leary, what were the models for these images?

Turns out that these depictions of Mrs. O’Leary bear a striking resemblance to images of the pétroleuses of the 1871 French Commune.

In March 1871, the citizens' militia and city council of Paris ran the French national government and army out of the city, and then declared a socialist-style government, referred to as the Commune. After taking back several Paris neighborhoods throughout April and May, the French army began their final attack on the remaining Commune-controlled areas. There were vicious street battles, and fires broke out and burned much of the city.

According to the French press, female radicals, dubbed pétroleuses, had supposedly started many of these fires, using petroleum-filled vessels, sort of like Molotov cocktails. While historians have not found any evidence that pétroleuses actually existed, the contemporary press nonetheless depicted these women as the source of the fires that ravaged the city.

Less than two weeks before the fire in Chicago, the Chicago Tribune ran an article detailing the Parisian trial of five supposed pétroleuses. The article claimed that the women were “repulsive in the extreme, being that of the lowest, most depraved class of women . . . their clothes were sordid, their hair undressed, their features coarse, bestial, and sullen” (Chicago Tribune, September 26, 1871).

This description of the pétroleuses could be applied to the images of Mrs. O’Leary. Images of the pétroleuses and of Mrs. O’Leary share the same wrinkled, masculine features; sharp, long noses; and wild, stringy, unkempt hair.

Perhaps the people who drew Mrs. O’Leary depicted her with characteristics of a pétroleuse simply because both were accused of burning their cities down. But I think it’s more than that.

In the early 1870s Chicago’s business elite had begun to worry that their large immigrant working-class population would turn against them and overthrow the city government to establish a Commune, just as workers had in Paris only months before the Chicago fire.

So when Mrs. O’Leary was presented as a scapegoat for the fire, illustrators were able to use her to express the deepest fears of the businessmen of Chicago: that the large immigrant working-class population might embrace the ideas of dangerous European radicals and destroy the city. The myth of Mrs. O’Leary, then, was not necessarily a condemnation of the real, live Kate O’Leary. It was a warning to Chicagoans about the threat posed by radical working-class immigrants. Beware, these images said, because they’re already among us, destroying our city, just like they did in Paris.

The irony of all this is that the 1871 fire provided a clean slate of sorts and allowed Chicago to develop into a modern industrial powerhouse in the last quarter of the 19th century. Without the opportunity for rebirth provided by the fire, this may not have occurred. In addition to the fire, however, there was one other necessary ingredient for Chicago’s industrial transformation: the presence of a large working-class immigrant population.

Perhaps Mrs. O’Leary, then, did represent what working-class immigrants would do to the city, but the illustrators got it backwards: instead of destroying it, the fire, Mrs. O’Leary, and hundreds of thousands of hard-working immigrants just like her were, in fact, the future of the city.

So, next time someone blames the 1871 Chicago fire on Kate O’Leary and her fidgety cow, you can let them know that she didn’t do it, but she’ll be happy to take the credit.

The Great Chicago Fire, Part 1

Mimi Cowan

Yesterday I told my eighty-eight-year-old grandmother I was writing a blog post about the Great Chicago Fire. She replied, "the one the cow started?"

Yup. The one the cow started. Well, actually, no. Everyone and their grandmother have blamed Chicago's biggest disaster on Mrs. O'Leary and her incendiary bovine for the past 142 years, but here's the thing:

The cow didn't do it.

But that got me thinking. Why, almost a century and a half later, is her name often the one thing people know about the fire? I've got some theories, so grab a mug of milk, pull up a stool, and keep an eye on that lantern.

First, a little background: Late on Sunday, October 8, 1871, a fire broke out on the west side of Chicago. Legend tells us that Catherine O'Leary placed a lantern behind the hoof of the cow she was milking. The cow kicked and the lantern broke, setting the surrounding hay on fire. Within moments, the entire barn was engulfed in flames.

Whether or not Mrs. O'Leary and her cow were at fault, there was most certainly a fire in or near the O'Leary barn that night. Thanks to an unusually dry summer, a city made of lumber, and a stiff wind, the flames spread too quickly for firefighters to control. By 1:30 a.m. the fire had engulfed the city courthouse, almost a mile from the O'Leary home (destroying most of the city's records in the process, to the horror of twenty-first-century historians. Ahem.). Two hours later, when the city's pumping station burned down, firefighters all but gave up.

The fire raged all day Monday, consuming the downtown and north side. Fortunately, rain arrived early on Tuesday, October 10, mostly extinguishing the flames. But also extinguished were the lives of at least 300 people. Additionally, about 100,000 people were left homeless (about a third of the city's population), and property worth nearly $200 million (somewhere between $2 and $4 billion in today's currency) was destroyed.

All this at the hands (okay, hooves) of a cow. Except, um, not. Most historians agree that it probably wasn't the fault of Mrs. O'Leary and her cow. In fact, the official inquiry into the fire, way back in 1871, found Mrs. O'Leary not guilty. But somehow the myth took off anyway. A hundred and twenty-six years later, the Chicago City Council again exonerated Mrs. O'Leary and her cow (see Chicago Tribune, October 6, 1997). But even today, if people know anything about the Great Conflagration of 1871 in Chicago, they know about Mrs. O'Leary and her cow.

So I started wondering: how did Mrs. O'Leary become the scapegoat and, more importantly, why?

The "how" was pretty easy to find out: on October 18, 1871, an article in the Chicago Times claimed that Mrs. O'Leary was a seventy-something-year-old Irish woman (she was actually in her late thirties or early forties) and that she had lived off handouts from the county poor relief board for most of her life (also false; she helped support her family of seven with a small milk business). When she was denied aid one day, the article explained, Kate O'Leary swore revenge on the city and later exacted this revenge with the careful placement of a lantern.

Although just about the only thing the article got right was Mrs. O'Leary's name, it became the root of what turned out to be a lifetime of shame and ostracism for Kate O'Leary.

I still wanted to know why this myth stuck if it wasn't the truth. Turns out that the answers tell us more about late 19th-century Chicago than they tell us about Kate O'Leary. Tomorrow I'll tell you what French socialists, women with Molotov cocktails, Mrs. O'Leary, and the creation of modern Chicago all have to do with one another.

But here's a spoiler: if I were Kate, I'd take credit for it after all.

As a young kid Ms. Cowan loved history, except Chicago history. And immigration history. And labor history. After spending her twenties working at some of the nation's top opera companies, she changed careers, enrolled in grad school, and discovered her passion for all things historical, especially Chicago, immigration, and labor history. She has been a resident of the City of Big Shoulders for the last four years while researching and writing her (Boston College) doctoral dissertation on Irish and German immigrants in 19th-century Chicago.


A Divided Kentucky

Aaron Astor

In 1926, historian E. Merton Coulter described Kentucky during the Civil War as a “crouching lion, stretched east and west . . . the thoroughfare of the continent.” Kentucky served as more than the geographic fulcrum of the Union between slave and free states. It also represented the political heart of the Union, its politicians, most notably Henry Clay, having offered Union-saving compromise measures since 1820. It had long embraced conservative Unionism, a political tradition that understood “conservative” both as a social orientation (allowing for only gradual change to the institution of slavery) and as a political relationship (with the Union buttressing the social order).

Militarily, Kentucky was absolutely critical to the Federal war effort, as the Ohio River would have become an international boundary if Kentucky had joined the Confederacy. Further, tens of thousands of Indianans, Illinoisans, and Ohioans traced their roots to Kentucky, including President Lincoln himself. Capturing the delicate but essential position of his native state, Lincoln commented that to “lose Kentucky is nearly the same as to lose the whole game.” After Fort Sumter, Kentuckians hoped to stay out of the war altogether, declaring the state “neutral” and pledging support for neither belligerent. But when Confederate General Leonidas Polk invaded the southwestern tip of the state at the beginning of September 1861, the legislature immediately declared its support for the Union.

What followed was a troubled partnership between a conservative, pro-slavery Unionist coalition within Kentucky and a Federal political and military engine steadily moving toward emancipation as a central war aim. The Emancipation Proclamation did not apply in Kentucky (since it was not in rebellion), and military officials made special efforts not to alienate the state’s civilian leadership. The breaking point came with black soldier enlistment in 1864, when 57% of black male Kentuckians of military age entered the Union army—the highest percentage, by far, of any slave state. The result was a dual rebellion in Kentucky: white Confederates rejected the “Union” part of conservative Unionism, and the slave population rejected the “conservative” part. By the end of 1865, eight months after Appomattox, Kentucky was, along with Delaware, one of the last two states to maintain slavery, requiring the Thirteenth Amendment to bring the institution to an end. Legions of former conservative Unionists welcomed their ex-Confederate foes back to the state with open arms. And the state quickly developed a belated Confederate identity that would provide a rubric of resistance against Radical Reconstruction.

To understand how Kentucky’s slaves converted a pro-slavery Union that many of their masters supported into an anti-slavery republic, we must consider slave attitudes before 1861. Kentucky slaves who rebelled against the old social order drew upon decades of resistance against a small-scale but tenacious slave system. Networks of fugitive slaves, repeated insurrectionary threats, defiant acts in defense of the slave community and family, and everyday forms of resistance made for a slave population that was politically aware of the stakes of civil war in 1861. Slave actions exacerbated tensions within the Ohio Valley, especially in neighboring free states where the Fugitive Slave Act was never popular. Alas, divisions between anti-slavery Midwesterners and pro-slavery Kentuckians—and among pro-slavery Kentuckians over the best means to support the slave-based social order—led to a complicated form of border state warfare by late 1861. Slaves understood that divisions within the master class created an opportunity for freedom, and they positioned themselves to exploit the opening brought by civil war.

This post draws from Aaron Astor, “The Crouching Lion's Fate: Slave Politics and Conservative Unionism in Kentucky,” Register of the Kentucky Historical Society 110 (2012): 293-326, which won the 2013 Richard H. Collins Prize from the Kentucky Historical Society.


Give Me a Break: Kit Kats and the Gilded Age

Steven Cromack

I just completed teaching a unit on the Gilded Age. Information-wise, the Gilded Age can be soporific—railroads, oil, Stalwarts, Mugwumps, and Half-Breeds. Who cares, especially if you are 16 and have just gotten your learner’s permit? Instead of teaching content, therefore, I decided to teach concepts. I took a risk, and, as a result, a classroom experiment that could have gone horribly awry not only intrigued students, but also forced them to reflect on their roles as citizens and to face their own sense of morality.

At the heart of the Gilded Age was the question of wealth. What, if anything, do the rich owe society? I began one class by having students choose a card from a prearranged deck stacked with twos, threes, and fours, plus four kings. I then gave each student as many Hershey’s Kisses as the number on their card. The kings, however, got “King Size” Kit Kats. There was outrage. I immediately had the students write and reflect. How did it feel to have the wealth of the classroom concentrated in the hands of a few? Their answers included discussions of fairness, chance, and justice. These were the themes discussed and debated during the Gilded Age by the Populists, millions of immigrants, the wealthy, and Theodore Roosevelt.

Homework assignments forced students to reflect on deep philosophical questions designed to make them squirm and think. Here is one example:

Is it just that you live in a house, apartment, or condominium while another lives in a refugee camp? Is it just that you have food in excess while others are starving? If there is indeed injustice in the world, who is responsible for its rectification? 

One can use Jacob Riis to teach empathy, Michael Moore to teach muckraking, The Wonderful Wizard of Oz to teach happiness, and the abuse and personal strength of Julia Ward Howe to hook students as to why history really, really, really matters. In the end, all of the content humanized history.

These assignments and classroom topics were unlike anything students had experienced before. Their reflections on the unit and its content were overwhelmingly positive—they preferred concepts to facts and dates. For the students, this approach was far more useful. Here is what one of them wrote:

“I feel it is only this year that I’ve realized the importance of our past in connection to our future. Though I have been attending public school for eleven years now, Mr. Cromack is the first teacher to start dissecting terms such as: injustice, fairness, citizen, and responsibility. The discussions are rigorous, the class engaging, and the material is suddenly compelling.”


Will Blog Posts and Tweets Hurt Junior Scholars? Part 2

Heather Cox Richardson

Untenured scholars are in a funny place: that gap between the old world and the new. Ten years ago, yes, blogging would convince many senior scholars that a junior person was not a serious academic because s/he was catering to a popular audience. Since then, however, the old world of the academy has begun to crumble, and while many departments have not yet caught up, others are aware they must move into the twenty-first century.

So will blog posts and tweets hurt your career? Maybe. But they can also help your career in very practical ways.

The first has to do with publishing. The gold standard for employment and for tenure remains a published book. When most senior scholars finished their doctorates, it was almost guaranteed that their dissertations would find academic publishers. In those days, university presses had standing contracts with university libraries that guaranteed automatic sales of a few thousand copies of each monograph that came out from a reputable press. Budget cuts over the last twenty years killed this system. No longer can an academic press be certain that libraries will buy its monographs. This means that presses can’t accept everything that comes over the transom, making it harder than ever to get a book contract.

But you still need one to get tenure.

One of the ways to improve your chances of landing that contract is to make sure you have written a book that appeals to readers beyond your immediate area of interest, one that a press thinks it will be able to sell. How can you do that? Engage with a wider audience on-line. Listen to questions. See which of your posts get a strong response. Are they written differently than your other work? Are they asking different questions? What does this tell you about your argument and your writing style? How can you speak more clearly to what is, after all, a self-selected audience of interested people?

Contracts also depend increasingly on your own networks. Do you have standing because you contribute to a popular blog? Are there lots of people who like to follow what you have to say? That will help convince a publisher that you’re worth a hearing.

An on-line presence might also speak to an employer more directly. Blogging gives you an opportunity to present yourself on your own terms. Any diligent search committee will google you. A series of interesting blog posts about teaching, for example, will never hurt your profile.

There are pitfalls to an on-line presence, of course. First of all, and above all, it’s important to remember that writing about someone on-line often means writing about someone who has no way to respond, and it’s unsporting, at best, to launch a tirade against someone who can’t answer. For the job market, this means it’s crazy to write intemperately about anyone or anything. This is a very small profession, and even if XYZ’s work infuriates you, there is no reason to call it out. XYZ will certainly have good friends at any institution at which you might interview, and they will not forget that you have taken a potshot (they googled you, remember?).

The exception to this rule, of course, is that if you feel strongly that you must take a stand either for or against something on principle, do it proudly and openly. And be prepared to defend your stand against opponents. Just don’t pick fights gratuitously.

On Twitter, the rules are the same as on Facebook. Don’t be an idiot. Don’t post about how much you hate your students, or your colleagues, or any of the obvious rants that will ruin you with a committee. Don’t post endless self-absorbed pieces about what you’re eating or drinking or saying or thinking. But Twitter and Facebook are not just danger zones; they can also reflect well on you. I follow a number of junior scholars on Twitter who are obviously tightly linked to their communities and to new scholarship, and who are struggling with really interesting intellectual issues. If one of them applied to my school, their Twitter presence would make them stand out.

The other major pitfall is that you cannot let your on-line presence keep you from producing more traditional scholarship. Blog and tweet, yes, but make sure those contributions to knowledge reflect and/or point back to your larger body of work. No search committee is going to consider a blog equivalent to a manuscript, but it very well might like to see a blog that augments the rest of what you do. Just don’t let on-line work suck up all your time.

Here’s a newsflash: The internet is here to stay. The profession hasn’t yet caught up with its implications, but it must, and soon. Today’s junior scholars are in a vague zone between the past and the present, but that same vagueness offers them a great opportunity to shape the way historians use the world’s revolutionary new technologies.


The Battle of Chickamauga at 150 and Teaching with Civil War Reenactments

Lisa Clark Diller

I will admit to having been a re-enactment virgin until the weekend of September 21, 2013. As readers of this blog are well aware, we are in the midst of all things Civil War in the United States. Chattanooga, Tennessee, is marking its own big battles all this fall. Specifically, the engagements at Chickamauga occurred 150 years ago, on September 19-20, 1863. As someone whose research reflects a great deal on another civil war (one in England in the 1640s), I have tended to smile blithely through local history enthusiasts’ explanations of the Confederacy, the Union, and the role played by East Tennessee in that conflict.

However, as a teacher of a first-year seminar who is always looking for the required “bonding experience” for my students, I decided this year to participate in some local history. I don’t think this was one of the most effectively executed re-enactments (others with more experience have confirmed this opinion). But the weather was lovely, the setting beautiful, and my students seemed to have a good time.

It made me start thinking about the role of such events as educational opportunities. What is the purpose of re-enactments of battles for historical education? The re-enactments seem to me to be a bit different from the interpretations offered in museums and on walking tours. Those of you who study public history can perhaps straighten me out on this. I can guess why the people participating might be enjoying themselves. I can see why communities might want to watch them. But when it comes to serving the goals of education—what is going on here? I am specifically thinking about the “so what?” of history. I quizzed my students before and after the event regarding what they thought this experience revealed of the “so what?” of historical thinking and skill-building. Here are some of their comments:

1. Reenactments remind people who live in the area—and even those who don’t attend and only see advertisements—that these events happened.  (The pessimism/reality check of my students regarding popular historical literacy was startling.)

2. The material culture of the past is the big thing these living history/re-enactments provide.  It was sobering to my students to think of the actual situation of people who lived/fought in the nineteenth century.  It made them more sympathetic to people whose ideas they encounter in texts.

3. Patriotism was reinforced. We had a conversation about what kind of patriotism reenacting battles might be emphasizing, but I’ll leave that conversation to the reader’s imagination.

4. War is ugly.  They didn’t seem to think that this would mean we would no longer fight wars, but they liked the reminder that this isn’t something glorious. (Still, during this particular reenactment, it didn’t seem that anyone felt the need to portray death—there was a striking lack of loss among the ranks as they advanced and retreated).

These are not the most nuanced observations, but my own experience is so thin that I’m sure I missed teaching moments over the course of the day. Perhaps I can blame the poor quality of the event itself.

Fellow HS blogger Eric Schultz’s experience at Gettysburg was much richer—and his description of all the learning opportunities available to people visiting the park reflects the best of what our National Park Service has to offer. Since reenactments aren’t allowed in the park itself, and this event took place a good 45-minute drive away from the Chickamauga battlefield, the observers here weren’t able to easily take advantage of all the resources the NPS offers.

I am interested in what readers of this blog think is useful about military reenactments in terms of pedagogy or historical thinking. I realize work has been done on the culture of reenactment itself (see here and here), so I’m not thinking as much about the actual participants. But how can we use the widespread and deep interest in this phenomenon to teach some of the skills of historical thinking? How much preparation might our students need ahead of time? Are there usually interpreters explaining what is happening in terms of military strategy, etc., as the visitors watch the efforts of the reenactors? What experiences have the rest of you had?
