The 2008 Obama Campaign and Online Advertising

There have been a number of recent pieces looking at online political advertising during the 2012 campaign.  I have written a bit about both the innovations in merging voter files with online usage data and their democratic implications, as well as the limitations of big data, but I wanted to go back here to the 2008 cycle to show the organizational contexts within which staffers deploy online advertising.  My hope is that understanding what the Obama campaign did in 2008 can help us ask better questions about what is going on during this cycle and its potential consequences.  As with my previous posts, all interview data presented here is drawn from my forthcoming book.

In 2008, the campaign’s new media division developed an extensive online advertising program.  The campaign housed the operations within the division, a decision that was the product of director Joe Rospars’s early negotiations for organizational jurisdiction.  Rospars also decided that all the online advertising would be handled in-house, which meant that rather than relying on outside vendors, Obama staffers produced all of their ads and negotiated their placement through advertising networks, saving the campaign a significant amount of money.  During the primaries, Michael Organ served as the initial full-time director of internet advertising.  For the general election, Andrew Bleeker took over this role.  Bleeker had served in a similar role for the Clinton campaign during the primaries, and had earlier worked for the Kerry campaign and, after the 2004 election, MSHC Interactive.

The Obama campaign had three primary objectives for its online advertising.  The first was to build a robust supporter base; the metrics for success included sign-ups to the campaign’s email list and online fundraising. Second was mobilization, a cluster of related advertising activities aimed at targeted demographic groups and individuals: voter registration, early voting, polling and caucus location look-ups, get-out-the-vote operations, and volunteer recruitment.  The third was persuasion, which involved advertising that delivered information about initiatives designed to appeal to groups and individuals the campaign profiled as undecided.  Persuasion accounted for the majority of the campaign’s online advertising expenditures.  As the scope of these goals suggests, internet advertising was integrated into all of the division’s activities.

The campaign measured its progress towards these goals by generating data and continuously evaluating metrics on the effectiveness of its online advertising.  Online advertising is a “closed loop”: staffers instantly knew responses to ads through data on ‘click-throughs.’  Tracking these click-throughs in real time enabled staffers to continuously measure outcomes and calculate the return on investment (ROI) of all of their online advertising.  Based on this data, the campaign’s online advertisers developed a working ‘social psychology of browsing’ to underlie their practice, crafting appeals, testing graphics, making resource allocation decisions, and reformulating goals based on user actions.
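The arithmetic behind this closed loop is simple. As a rough sketch (all figures and function names here are hypothetical illustrations, not the campaign’s actual tooling):

```python
# Sketch of closed-loop ad measurement: click-through rate and ROI.
# All numbers and names are hypothetical.

def click_through_rate(clicks, impressions):
    """Fraction of ad impressions that produced a click."""
    return clicks / impressions if impressions else 0.0

def roi(value_generated, ad_spend):
    """Value produced per dollar of advertising spend."""
    return value_generated / ad_spend if ad_spend else 0.0

# Example: an ad shown 200,000 times drew 1,500 clicks and
# $2,400 in donations against $1,000 of spend.
ctr = click_through_rate(1_500, 200_000)   # 0.0075
ad_roi = roi(2_400.00, 1_000.00)           # 2.4
```

Because every impression, click, and downstream action is logged, these two numbers can be recomputed continuously for every ad, which is what made the real-time reallocation described above possible.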

For example, developing the proper metrics, or measures of effectiveness, for online advertising was a central part of staffers’ work.  To do so, staffers had to clearly specify the outcomes they desired for the three goals cited above: building the supporter base, mobilization, and persuasion.  With respect to building the supporter base, advertising goals included generating sign-ups for the campaign’s email list, driving traffic to the website, and soliciting donations.  Persuasion metrics included the number of click-throughs to applications designed to appeal to undecided voters, such as the ‘tax cut calculator’ that enabled individuals to calculate how much money they would save under Obama’s proposed tax cut plan.  The Division used online advertising to heavily promote the calculator, targeting a wide audience on general interest online news sites.  Other metrics for the campaign’s persuasion advertising entailed click-through rates on targeted issue advertising.

The metrics around mobilization were more complicated given the need to target groups and individuals by state for electoral purposes.  Online advertising staffers closely collaborated with field staffers around these initiatives.  The campaign’s field division actually provided the funding for the mobilization advertising.  Director Jon Carson was an advocate of online advertising, in large part because, for some key electoral goals such as registering voters, it was more cost effective and had greater reach than hiring field staffers.  As Bleeker explains:

“We were scalable and efficient and he [Carson] knew the cost of doing things offline, so he was the one who advocated doing things online. In a lot of cases we can do more of it and more cheaply than you can do door-to-door.”

For new media staffers, this meant using online ads to capture email addresses, recruit volunteers, register voters, provide supporters with information on their polling locations, and turn them out on election day.

The key to all of these activities was mobilizing only those individuals likely to support the candidate.  While the data infrastructure and analytic practices of modeling voters have a long history (for further details, see Rasmus Nielsen’s excellent new book Ground Wars), in brief the campaign’s modeling firm Strategic Telemetry began their work by taking a poll of a random, representative sample of the electorate. Based on the candidates whom these polled voters supported, consultants then worked backward to find correlated variables for Obama supporters and undecideds. As a central figure in the Democratic Party’s data efforts over the last decade described: “We throw a ton of stuff in the black box, and it spits out which things have correlations.”

Strategic Telemetry then created combinations of these correlated variables that corresponded to the characteristics of the candidate’s supporters and undecideds. The campaign’s data consultants then layered these models onto the electorate using voter file data, which contains information on every member of the electorate drawn from a host of sources.  The core of these databases (which are owned by both party organizations and commercial firms such as Catalist, all of which the campaign used) is public data collected from local, state, and federal records, including party registration, voting history, political donations, vehicle registration, and real estate records. This data is supplemented with commercial information such as magazine subscription records, credit histories, and even grocery “club-card” purchases.  Both political parties, as well as a host of commercial firms, have amassed enormous national voter databases that they maintain and provide as a service to candidates in races from mayoral to presidential; these databases contain this information along with a historical record of voter contacts across electoral cycles.

Layering these models onto the voter file enabled the campaign to generate a composite score of likely support for Obama on a 0–100 scale for every member of the electorate. The consultants then continually polled and incorporated the results of field canvasses to test the accuracy of their models and update them.  This approach to voter modeling helped the campaign better identify its supporters and those leaning in the candidate’s direction, and to target its fieldwork and online advertising accordingly.
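In outline, the scoring step amounts to mapping each voter-file record through a weighted model onto a 0–100 scale. The sketch below is an illustrative assumption on my part — the variables, weights, and logistic form are hypothetical, not Strategic Telemetry’s actual model:

```python
import math

# Hypothetical sketch of layering a support model onto a voter file.
# The variables and weights are illustrative, not the real model.
WEIGHTS = {
    "is_registered_dem": 2.1,
    "age_under_30": 0.8,
    "urban_precinct": 0.6,
    "donated_before": 1.4,
}
INTERCEPT = -1.5

def support_score(voter):
    """Map a voter-file record (dict of 0/1 flags) to a 0-100
    likely-support score via a logistic transform."""
    z = INTERCEPT + sum(w * voter.get(k, 0) for k, w in WEIGHTS.items())
    return round(100 / (1 + math.exp(-z)))

voter = {"is_registered_dem": 1, "age_under_30": 1, "urban_precinct": 1}
score = support_score(voter)  # an integer between 0 and 100
```

The continual re-polling and canvass feedback described above correspond, in this sketch, to re-fitting the weights as new labeled data arrives.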

In the context of online advertising, new media staffers, working in consultation with field staffers, identified target demographics in each battleground state.  Staffers wanted to avoid spending money on online ads that could boost John McCain’s turnout, so online advertising was allocated towards sites popular with younger, African American, and Latino voters.  Staffers also used the geo-location targeting made possible by IP addresses to display ads to individuals residing in areas that had high concentrations of Democratic voters and favorable demographics.  The campaign also ran targeted advertising to voter groups and individuals through cookies as well as purchased America Online and Yahoo user data.  The latter were initial steps toward developing individual-specific advertising – the expansion of which is driving online advertising this campaign cycle. As Michael Bassik, a central figure in online advertising in Democratic Party politics over the last decade, describes:

“In 2008 Yahoo! partnered with Catalist to do a merge of the Catalist data and the Yahoo! data so that individual organizations could advertise just to matched segments and ‘look-a-like’ segments.  For example, say Yahoo! has a list of 100,000 people and Catalist has a list of 100,000 people and they find 20,000 people in common.  Yahoo! then also finds other people within their ‘network group’ that have the same sort of behavior and tries to get a match, so that is the ‘look-a-like’ audience.  And then organizations were invited through this relationship between Catalist and Yahoo! to advertise just to Democrats, just to Republicans, just to independents, that type of thing. Yahoo! provided data back to an independent third party organization in terms of who saw the ads personally, which ads they saw, who clicked, and then they did phone polling to identify whether or not exposure to the ads moved perceptions.”

In addition, Facebook was a new advertising vehicle.  The commercial social networking service provided a wealth of new ways to target groups of voters.  These ads were based on a ‘cost per click’ model, where the campaign only paid when an individual saw an ad and clicked on it.  On Facebook, the campaign targeted advertising based on a host of different characteristics revealed on voters’ profile pages, from political persuasion and religion to hobbies.
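The appeal of cost-per-click pricing is that spend tracks engagement rather than exposure. A rough sketch of the arithmetic, with hypothetical prices and conversion numbers:

```python
# Illustrative cost-per-click arithmetic; all numbers hypothetical.
# Under CPC, spend depends only on clicks, not on impressions served.

def cpc_spend(clicks, price_per_click):
    """Total spend under a cost-per-click model."""
    return clicks * price_per_click

def cost_per_signup(clicks, price_per_click, signups):
    """Effective acquisition cost when some clickers go on to sign up."""
    return cpc_spend(clicks, price_per_click) / signups

# 4,000 clicks at $0.35 each, of which 700 clickers signed up.
spend = cpc_spend(4_000, 0.35)            # about $1,400 total
cps = cost_per_signup(4_000, 0.35, 700)   # about $2 per sign-up
```

Numbers like `cps` are what let staffers compare an ad buy directly against other acquisition channels, such as email or door-to-door canvassing.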

To track the effectiveness of the mobilization advertising campaigns, staffers generated ROIs for all of their advertisements and compared them with other communications.  For example, to determine where to allocate resources to register voters through the online voter registration tool ‘Vote For Change’, the campaign began by weighting variables relating to the field plan, priorities of the state organizations, the rules of each primary, the targeted demographics, and the requirements for capturing data.  Staffers then ran trials and assessed the performance of the online tools at the campaign’s disposal to help meet the goal of registering voters, determining the relative effectiveness of email versus online advertising for each different category of voters.

The campaign also used innovative means to track the effectiveness of online advertisements in this area.  Staffers ran a series of experimental trials, for instance, to assess whether an individual responding to an online advertisement for Vote For Change actually registered to vote.  They did so by matching the names of individuals who clicked through the ad and signed up online to published lists of new voters issued by secretaries of state.
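That matching step can be sketched as follows; the names and the crude normalization here are hypothetical, and real record linkage against secretary-of-state lists would be considerably fuzzier (middle names, address changes, transcription errors):

```python
# Hypothetical sketch of matching ad-driven registrants against a
# state's published list of newly registered voters.

def match_rate(ad_signups, new_voter_list):
    """Fraction of ad-driven sign-ups that appear on the official list.
    Uses naive lowercase name matching; real linkage would be fuzzier."""
    official = {name.strip().lower() for name in new_voter_list}
    matched = [n for n in ad_signups if n.strip().lower() in official]
    return len(matched) / len(ad_signups) if ad_signups else 0.0

signups = ["Jane Doe", "John Smith", "Ada Lovelace"]
published = ["jane doe", "ada lovelace", "someone else"]
rate = match_rate(signups, published)  # 2 of 3 matched
```

The resulting rate is what turns a click-through count into a verified registration count, closing the loop between the ad and the official record.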

Meanwhile, staffers tracked the ROIs for particular ads over set periods, such as thirty or sixty days, and for a range of actions.  Looking at these ROIs enabled staffers not only to find optimal content and placement, but also to follow ad performance over time.  Staffers looked at the effectiveness of ads on many levels, such as whether individuals responding to ads to look up their polling place also donated to the campaign.  For example, if an individual clicked on an ad and signed up for the email list, staffers followed that person’s actions over the ROI time period to see whether they took other actions such as donating, volunteering, or hosting an event.  Staffers then calculated how many more sign-ups or polling place look-ups would happen with each additional dollar invested in advertising, which in turn shaped how the Division allocated its funds.  For example, Rospars cites how the campaign knew:

“whether our online ad resulted in that person voting absentee or requesting a ballot, and then we also know if that person stays on the e-mail list and winds up donating or goes on to a volunteer activity.  So we can measure our ROI for the ad and make all sorts of choices about where to run online ads, and how to deal with our budget through lots of very complicated assessments of our return on investment financially and from a volunteer perspective.”
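The “each additional dollar” calculation described above amounts to estimating a marginal rate of return. A minimal sketch, with hypothetical spend and sign-up figures:

```python
# Illustrative marginal-return calculation: how many additional
# sign-ups each extra advertising dollar buys. Numbers hypothetical.

def marginal_signups_per_dollar(spend_a, signups_a, spend_b, signups_b):
    """Slope between two observed (spend, sign-ups) points."""
    return (signups_b - signups_a) / (spend_b - spend_a)

# e.g. raising spend from $5,000 to $8,000 lifted sign-ups
# from 2,000 to 2,900.
rate = marginal_signups_per_dollar(5_000, 2_000, 8_000, 2_900)
extra_from_1000_more = rate * 1_000  # projected lift from $1,000 more
```

Comparing this slope across ads and channels is one simple way a division could decide where the next advertising dollar should go, as the budget choices Rospars describes suggest.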

Organizational priorities, in turn, shaped what counted as ‘maximizing returns’ from online advertising.  In the context of fundraising advertisements, the campaign was sometimes willing to get back only an estimated fifty cents on the dollar for an ad that it ran.  This was the case if staffers knew that the ad reached people the campaign could not contact through its other outreach efforts, or if the division prioritized sign-ups to the email list over financial contributions.  Importantly, though, the Division always made these decisions based on analysis of data.  All of which meant that the online advertisers knew what their work accomplished and took great pride in their conviction that they spent the campaign’s money well.  Division staffers, for instance, often cited how more people looked up their polling place online during the general election than the number of votes that provided Obama’s margin of victory.
