Friday, November 23, 2012

Segmentation 2.0: Ideas and Soft Variables

On the Web 2.0, connections between people, whether more horizontal and symmetrical or more vertical and asymmetrical, arise not so much from ties defined by their quality and specificity as from ideas.
By Rubén Weinsteiner

 




We connect over the things that worry us, that strike us, that hurt us, that move us, that impassion us.

Connections on the Web 2.0 are driven more by soft variables than by hard ones. The organic character of these connections and communications stems more from what people feel, what they think, and what beliefs and values they hold than from who they are, where they are, how old they are, and how they live.

Transactional drivers on the Web 2.0 revolve around the things that interest and affect us. And many things affect us, which is why we build relationships with people very different from ourselves; those people in turn find points of contact with people very different from them and from us, who may in turn connect with us through other thematic frames of convergence.
This dynamic is one of the great disruptions of the Web 2.0, and leaders, decision makers, and content creators still fail to recognize that dimension.

Much attention goes to follower counts on Twitter or Facebook, to a blog's or site's mentions in search engines, or to the authority that publishers build in their spaces when they are picked up by a large number of amplifiers.
In most 2.0 strategies, the focus is not on how much and how we connect with people through what really matters to them.

Segmentation by soft variables lets us connect with the specificity and organic character of each segment and speak directly to people united by beliefs, values, ideas, passions, and activities. More by what they do than by what they are.

This changes the game: a surgical, micro-segmented approach allows companies, political leaders, and organizations in general to map and segment by thematic frames of belonging and relevance, in order to establish intense communication with very heterogeneous people linked to one another by soft variables that are new to the analysis.
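To make the contrast with hard variables concrete, here is a minimal sketch in Python of segmentation by soft variables: users are grouped by shared interests instead of by age or location. All names and data are invented for illustration.

```python
from collections import defaultdict

# Hypothetical user records: hard variables (age, city) alongside
# soft variables (interests). Everything here is illustrative.
users = [
    {"name": "Ana",   "age": 24, "city": "Córdoba",      "interests": {"climate", "urban cycling"}},
    {"name": "Bruno", "age": 57, "city": "Buenos Aires", "interests": {"climate", "open data"}},
    {"name": "Carla", "age": 35, "city": "Mendoza",      "interests": {"open data", "microfinance"}},
]

# Segment by soft variables: one segment per shared interest,
# ignoring age and location entirely.
segments = defaultdict(list)
for user in users:
    for interest in user["interests"]:
        segments[interest].append(user["name"])

for topic, members in sorted(segments.items()):
    print(f"{topic}: {members}")
# climate: ['Ana', 'Bruno'] -- a 24-year-old and a 57-year-old in
# different cities fall into the same segment because of a shared idea.
```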

www.weinsteiner.net
Rubén Weinsteiner

Saturday, November 10, 2012

The Data Crunchers Who Helped Obama Win

In late spring, the backroom number crunchers who powered Barack Obama’s campaign to victory noticed that George Clooney had an almost gravitational tug on West Coast females ages 40 to 49. The women were far and away the single demographic group most likely to hand over cash, for a chance to dine in Hollywood with Clooney — and Obama.
So as they did with all the other data collected, stored and analyzed in the two-year drive for re-election, Obama’s top campaign aides decided to put this insight to use. They sought out an East Coast celebrity who had similar appeal among the same demographic, aiming to replicate the millions of dollars produced by the Clooney contest. “We were blessed with an overflowing menu of options, but we chose Sarah Jessica Parker,” explains a senior campaign adviser. And so the next Dinner with Barack contest was born: a chance to eat at Parker’s West Village brownstone.

For the general public, there was no way to know that the idea for the Parker contest had come from a data-mining discovery about some supporters: affection for contests, small dinners and celebrity. But from the beginning, campaign manager Jim Messina had promised a totally different, metric-driven kind of campaign in which politics was the goal but political instincts might not be the means. “We are going to measure every single thing in this campaign,” he said after taking the job. He hired an analytics department five times as large as that of the 2008 operation, with an official “chief scientist” for the Chicago headquarters named Rayid Ghani, who in a previous life crunched huge data sets to, among other things, maximize the efficiency of supermarket sales promotions.
Exactly what that team of dozens of data crunchers was doing, however, was a closely held secret. “They are our nuclear codes,” campaign spokesman Ben LaBolt would say when asked about the efforts. Around the office, data-mining experiments were given mysterious code names such as Narwhal and Dreamcatcher. The team even worked at a remove from the rest of the campaign staff, setting up shop in a windowless room at the north end of the vast headquarters office. The “scientists” created regular briefings on their work for the President and top aides in the White House’s Roosevelt Room, but public details were in short supply as the campaign guarded what it believed to be its biggest institutional advantage over Mitt Romney’s campaign: its data.
On Nov. 4, a group of senior campaign advisers agreed to describe their cutting-edge efforts with TIME on the condition that they not be named and that the information not be published until after the winner was declared. What they revealed as they pulled back the curtain was a massive data effort that helped Obama raise $1 billion, remade the process of targeting TV ads and created detailed models of swing-state voters that could be used to increase the effectiveness of everything from phone calls and door knocks to direct mailings and social media.

How to Raise $1 Billion
For all the praise Obama’s team won in 2008 for its high-tech wizardry, its success masked a huge weakness: too many databases. Back then, volunteers making phone calls through the Obama website were working off lists that differed from the lists used by callers in the campaign office. Get-out-the-vote lists were never reconciled with fundraising lists. It was like the FBI and the CIA before 9/11: the two camps never shared data. “We analyzed very early that the problem in Democratic politics was you had databases all over the place,” said one of the officials. “None of them talked to each other.” So over the first 18 months, the campaign started over, creating a single massive system that could merge the information collected from pollsters, fundraisers, field workers and consumer databases as well as social-media and mobile contacts with the main Democratic voter files in the swing states.
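The article does not reveal the campaign's actual schema, so the following is only a toy illustration of the "single megafile" idea: fabricated miniature silos (a voter file, a donations list, a field-contact log) joined onto one canonical voter record with pandas. Every table, column, and value is invented.

```python
import pandas as pd

# Fabricated miniature silos; real voter files key on far richer identifiers.
voter_file = pd.DataFrame({
    "voter_id": [101, 102, 103],
    "name": ["J. Doe", "M. Roe", "A. Poe"],
    "state": ["OH", "FL", "OH"],
})
donations = pd.DataFrame({
    "voter_id": [101, 103],   # assumed already matched to the voter file
    "total_given": [250.0, 40.0],
})
field_contacts = pd.DataFrame({
    "voter_id": [101, 102],
    "door_knocks": [2, 1],
})

# One "megafile": every silo joined onto the canonical voter record,
# so fundraising, field, and registration data finally talk to each other.
megafile = (voter_file
            .merge(donations, on="voter_id", how="left")
            .merge(field_contacts, on="voter_id", how="left"))
print(megafile)
```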
The new megafile didn’t just tell the campaign how to find voters and get their attention; it also allowed the number crunchers to run tests predicting which types of people would be persuaded by certain kinds of appeals. Call lists in field offices, for instance, didn’t just list names and numbers; they also ranked names in order of their persuadability, with the campaign’s most important priorities first. About 75% of the determining factors were basics like age, sex, race, neighborhood and voting record. Consumer data about voters helped round out the picture. “We could [predict] people who were going to give online. We could model people who were going to give through mail. We could model volunteers,” said one of the senior advisers about the predictive profiles built by the data. “In the end, modeling became something way bigger for us in ’12 than in ’08 because it made our time more efficient.”
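The campaign's real models are not public. A plausible minimal sketch of a persuadability ranking, assuming something like a logistic regression trained on past contact outcomes over the basic attributes the article lists, might look like this; every number below is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic voters: four basic features standing in for the kinds of
# inputs the article mentions (age, sex, voting record, consumer data).
X = rng.normal(size=(1000, 4))
# Synthetic labels: 1 if a past contact persuaded the voter.
y = (X @ np.array([0.8, -0.3, 1.2, 0.5]) + rng.normal(size=1000)) > 0

model = LogisticRegression().fit(X, y)

# Score a fresh call list and sort it so the most persuadable
# voters come first, as the article describes.
call_list = rng.normal(size=(10, 4))
scores = model.predict_proba(call_list)[:, 1]
order = np.argsort(scores)[::-1]
print("call order:", order)
print("scores:", scores[order].round(2))
```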
Early on, for example, the campaign discovered that people who had unsubscribed from the 2008 campaign e-mail lists were top targets, among the easiest to pull back into the fold with some personal attention. The strategists fashioned tests for specific demographic groups, trying out message scripts that they could then apply. They tested how much better a call from a local volunteer would do than a call from a volunteer from a non–swing state like California. As Messina had promised, assumptions were rarely left in place without numbers to back them up.

The new megafile also allowed the campaign to raise more money than it once thought possible. Until August, everyone in the Obama orbit had protested loudly that the campaign would not be able to reach the mythical $1 billion fundraising goal. “We had big fights because we wouldn’t even accept a goal in the 900s,” said one of the senior officials who was intimately involved in the process. “And then the Internet exploded over the summer,” said another.
A large portion of the cash raised online came through an intricate, metric-driven e-mail campaign in which dozens of fundraising appeals went out each day. Here again, data collection and analysis were paramount. Many of the e-mails sent to supporters were just tests, with different subject lines, senders and messages. Inside the campaign, there were office pools on which combination would raise the most money, and often the pools got it wrong. Michelle Obama’s e-mails performed best in the spring, and at times, campaign boss Messina performed better than Vice President Joe Biden. In many cases, the top performers raised 10 times as much money for the campaign as the underperformers.
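The mechanics of such a head-to-head e-mail test are easy to sketch: compare two variants on conversion and dollars per send, and run a quick two-proportion z-test so a winner is not declared on noise. The variant labels and numbers below are fabricated for illustration.

```python
import math

# Fabricated results from two subject-line variants of one appeal.
variants = {
    "A": {"sent": 50_000, "donors": 410, "raised": 21_500.0},
    "B": {"sent": 50_000, "donors": 980, "raised": 53_200.0},
}

for name, v in variants.items():
    rate = v["donors"] / v["sent"]
    per_send = v["raised"] / v["sent"]
    print(f"variant {name}: conversion {rate:.2%}, ${per_send:.3f} per e-mail")

# Two-proportion z-test on conversion rates before rolling the
# winner out to the full list.
a, b = variants["A"], variants["B"]
p = (a["donors"] + b["donors"]) / (a["sent"] + b["sent"])
se = math.sqrt(p * (1 - p) * (1 / a["sent"] + 1 / b["sent"]))
z = (b["donors"] / b["sent"] - a["donors"] / a["sent"]) / se
print(f"z = {z:.1f}")  # |z| > 1.96 means significant at the 5% level
```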
Chicago discovered that people who signed up for the campaign’s Quick Donate program, which allowed repeat giving online or via text message without having to re-enter credit-card information, gave about four times as much as other donors. So the program was expanded and incentivized. By the end of October, Quick Donate had become a big part of the campaign’s messaging to supporters, and first-time donors were offered a free bumper sticker to sign up.

Predicting Turnout
The magic tricks that opened wallets were then repurposed to turn out votes. The analytics team used four streams of polling data to build a detailed picture of voters in key states. In the past month, said one official, the analytics team had polling data from about 29,000 people in Ohio alone — a whopping sample that amounted to nearly half of 1% of all voters there — allowing for deep dives into exactly where each demographic and regional group was trending at any given moment. This was a huge advantage: when polls started to slip after the first debate, they could check to see which voters were changing sides and which were not.
It was this database that helped steady campaign aides in October’s choppy waters, assuring them that most of the Ohioans in motion were not Obama backers but likely Romney supporters whom Romney had lost because of his September blunders. “We were much calmer than others,” said one of the officials. The polling and voter-contact data were processed and reprocessed nightly to account for every imaginable scenario. “We ran the election 66,000 times every night,” said a senior official, describing the computer simulations the campaign ran to figure out Obama’s odds of winning each swing state. “And every morning we got the spit-out — here are your chances of winning these states. And that is how we allocated resources.”
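The nightly simulation is simple to sketch in outline: treat each swing state as a coin flip at its estimated win probability, sum the electoral votes, and repeat 66,000 times. The probabilities and the 237 safe electoral votes below are assumptions for illustration, not the campaign's inputs, and a real model would also correlate states rather than flip them independently.

```python
import numpy as np

# Assumed win probabilities and electoral votes for a few swing states.
states = {"OH": (0.62, 18), "FL": (0.50, 29), "VA": (0.55, 13),
          "CO": (0.57, 9),  "IA": (0.60, 6),  "NH": (0.58, 4)}
SAFE_EV = 237            # electoral votes assumed already locked up
N_RUNS = 66_000          # the article's nightly figure

rng = np.random.default_rng(0)
probs = np.array([p for p, _ in states.values()])
votes = np.array([ev for _, ev in states.values()])

# Each run decides every swing state independently, then sums EVs.
wins = rng.random((N_RUNS, len(states))) < probs
totals = SAFE_EV + wins @ votes

print(f"P(at least 270 EV) = {(totals >= 270).mean():.1%}")
for name, col in zip(states, wins.T):
    print(f"{name}: carried in {col.mean():.0%} of runs")
```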
Online, the get-out-the-vote effort continued with a first-ever attempt at using Facebook on a mass scale to replicate the door-knocking efforts of field organizers. In the final weeks of the campaign, people who had downloaded an app were sent messages with pictures of their friends in swing states. They were told to click a button to automatically urge those targeted voters to take certain actions, such as registering to vote, voting early or getting to the polls. The campaign found that roughly 1 in 5 people contacted by a Facebook pal acted on the request, in large part because the message came from someone they knew.

Data helped drive the campaign’s ad buying too. Rather than rely on outside media consultants to decide where ads should run, Messina based his purchases on the massive internal data sets. “We were able to put our target voters through some really complicated modeling, to say, O.K., if Miami-Dade women under 35 are the targets, [here is] how to reach them,” said one official. As a result, the campaign bought ads to air during unconventional programming, like Sons of Anarchy, The Walking Dead and Don’t Trust the B—- in Apt. 23, skirting the traditional route of buying ads next to local news programming. How much more efficient was the Obama campaign of 2012 than 2008 at ad buying? Chicago has a number for that: “On TV we were able to buy 14% more efficiently … to make sure we were talking to our persuadable voters,” the same official said.
The numbers also led the campaign to escort their man down roads not usually taken in the late stages of a presidential campaign. In August, Obama decided to answer questions on the social news website Reddit, which many of the President’s senior aides did not know about. “Why did we put Barack Obama on Reddit?” an official asked rhetorically. “Because a whole bunch of our turnout targets were on Reddit.”
That data-driven decisionmaking played a huge role in creating a second term for the 44th President and will be one of the more closely studied elements of the 2012 cycle. It’s another sign that the role of the campaign pros in Washington who make decisions on hunches and experience is rapidly dwindling, being replaced by the work of quants and computer coders who can crack massive data sets for insight. As one official put it, the time of “guys sitting in a back room smoking cigars, saying ‘We always buy 60 Minutes’” is over. In politics, the era of big data has arrived.

Rubén Weinsteiner

Saturday, November 3, 2012

The Evolution of Employment: A Look


Consumption and employment are two of the pillars of the strength of the model launched in May 2003. Many heterodox management choices and economic policy decisions converge to sustain them, most of them severely criticized by the gurus whose expertise led the country to 24% unemployment, 54% poverty, and 27.6% destitution in the neoliberal crisis of 2001.
We have also noted at Ramble that, broadly speaking, employment and consumption were the engines of the resounding electoral support of 2011, to the point that 50% of the votes obtained by the FPV are volatile and a direct consequence of the robustness both indicators have shown since 2003, and in particular of their recovery after the 2009 downturn.

To track job creation and its breakdown by sector, as well as the outlook for this singular indicator heading into 2013, we turn to Analytica's reading of the current situation.

Economic activity is slowly picking up speed, but employment remains far behind. The slowdown of the first half of the year is weighing on one of the main pillars of domestic-market expansion. Unemployment is not rising significantly, but neither is it falling, which is beginning to reveal certain limits of the current model.
A review of the main labor indicators confirms this picture. Worker suspensions and layoffs (as measured by Tendencias Económicas) eased in the third quarter relative to the turbulent second quarter, but they still stand well above 2011 levels. Unlike in the 2009 crisis, layoffs are now systematically fewer than suspensions, reflecting employers' perception that the phenomenon is more transitory than permanent.
Industrial employment data tell the same story: hours worked are falling (-1.9% in the third quarter) while employment keeps rising (+1.2%), albeit at an ever slower pace.
The EPH household survey (Indec) shows a marginal rise in unemployment between the first and second quarters (from 7.1% of the economically active population to 7.2%) but a significant increase in underemployment (from 7.4% to 9.4%). In other words, more employed people could work additional hours but cannot.
The readings are consistent.
A recent Manpower survey shows the same cooling in labor demand. According to the study, only 12% of firms expect to expand their headcount in 2013, while 9% project a reduction and 78% expect no change. The net positive balance (+3%) is the worst since the 2009 crisis and sits 14 points below the reading for the fourth quarter of last year.
The survey indicates that headcount increases are expected in 4 of the country's 6 regions. The most optimistic hiring plans are found in Patagonia (+22%) and the NEA region (+11%). Projections for the NOA (+7%) and the AMBA (+5%) are far more modest. Meanwhile, employers forecast reductions in Cuyo (-4%) and the Pampas region (-2%).
As for the trend, it is worth noting that expectations for Patagonia are the best of the past four years, confirming the growth outlook for the oil industry on the back of the new momentum generated by YPF. For the remaining regions the trend is worrying: expectations sit at their post-2009-crisis lows.
At the sector level, the reading is very much in keeping with the times. Among the most dynamic segments in job creation are Public Administration and Education (+9%), followed by Services (+8%) and Manufacturing (+6%). The state has become an active employer, the service sector benefits from the pro-consumption bias, and industry looks somewhat more vigorous heading into 2013.
One rung below are Commerce and Transport and Public Utilities (+5%). The hardest hit are Finance, Insurance, and Real Estate (+2%), Agriculture and Fishing (0%), and Mining and Construction (-4%). This was foreseeable: the real estate sector has to shrink to survive in an environment with far fewer transactions, and construction has yet to absorb the shortage of dollars.
The underlying problem is that in every sector the outlook for the fourth quarter stands at its lowest level since the post-Lehman recovery.
In short, we are not seeing generalized job destruction but rather specific problems in certain sectors. Other factors worry us. On the one hand, underemployment tied to the sharp slowdown in activity, which should ease in the coming months as the economy returns to growth in the 3% to 4% range. On the other, a low capacity to generate new jobs, tied to problems that are much harder to defuse.
Specifically, the fall in investment (gross fixed capital formation, IBIF). Official second-quarter data show a drop in IBIF of almost 4 percentage points of GDP against the same quarter of 2011 (from 25% to 21%). Although it appears to be recovering in the second half of the year, it is unlikely to exceed 22% of GDP on average for the year, a very significant setback from the more than 24 points recorded a year earlier.
This is the government's main challenge. Reviving investment requires focusing not only on its financing but also on other problems. For one, many companies' profit margins have been declining since their 2010 peak. Profits of Merval-listed companies fell 20% over the past year, while the cost of capital rose significantly. More perceived risk and less profitability is an equation that clearly depresses investors' animal spirits.
If these and other issues are not addressed, next year's growth may end up lifting employment levels only modestly.
