This is great news. The chief thing bothering me about the wiki-news idea was that if all updates to an ongoing story were consolidated into one “living” article, it would be difficult for users to track those updates.
RSS alerts users every time a new article is posted, but if all the updating is done on an existing article, users won’t be automatically alerted, and thus won’t know to check back in. This change to Google Reader could suggest a solution to that problem.
In trying to measure social media performance and cull lessons for improvement, here are the measures/data I’ve looked at so far and why.
Analysis proved harder for YouTube than for Facebook: because videos can go viral, it’s harder to identify trends and to run controlled experiments.
1) Number of fans over time: This measure is primarily just to assess the overall health of the account. Ideally it will show some sort of steady upward growth. It can also identify sudden spikes or drops, which can be correlated back with events.
I also looked at the average rate of growth. It was suggested that the growth rate might be a more useful measure, since the number of fans is always expected to go up seeing as it’s cumulative over time. That’s true, but as far as setting metrics targets, it might not make sense to pursue an increasing growth rate, since over the natural lifecycle of a page, the growth rate should hit a peak and then slow down. So I think for now I will just monitor both, considering my target metric an increased number of fans and the growth rate a useful indicator of the page’s lifecycle.
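As a rough illustration of the two measures (a hypothetical sketch with made-up numbers, not tied to any real analytics export), both can be computed from a simple series of cumulative fan counts:

```python
# Hypothetical sketch: absolute growth vs. growth rate from a weekly
# series of cumulative fan counts (made-up numbers).
fan_counts = [1000, 1150, 1320, 1480, 1600]  # one reading per week

# Absolute growth per week (the target metric: total fans keeps rising)
growth = [b - a for a, b in zip(fan_counts, fan_counts[1:])]

# Growth *rate* per week (the lifecycle indicator: expected to peak,
# then slow as the page matures)
growth_rate = [(b - a) / a for a, b in zip(fan_counts, fan_counts[1:])]

print(growth)       # [150, 170, 160, 120]
print(growth_rate)  # roughly [0.15, 0.148, 0.121, 0.081]
```

Tracking both, the first list is the metrics target and the second flags where the page is in its lifecycle.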
My last post on Haiti coverage concluded that news organizations need to be partnering to provide the best coverage and best resources when a crisis hits, and that they need to have a plan in advance for how to do this so that they are ready to deploy in the chaos of breaking news.
YouTube is about to emphasize related videos and search even further, meaning that understanding YouTube SEO is getting more important as well.
I’m still stumped by how YouTube analyzes related videos though. I understand that in theory it’s based on the title, description and tags, with ordering determined by some algorithm that also includes popularity and interactions.
But I’ve seen some weird things that don’t bear that out: cases where YouTube matched videos more successfully than I would have expected even though they shared no description or tag elements, and cases where it was wildly off the mark, matching videos that seemingly had nothing in common (including no shared tags or description elements).
More testing seems the only way to get a better handle on what makes for successful placement.
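To make that testing a bit more systematic, here’s a toy scorer reflecting my guess at the general shape of the signals (tag/title overlap plus a popularity term); it is emphatically not YouTube’s actual algorithm, and all names and numbers are invented:

```python
# Toy relatedness scorer: tag/title overlap plus popularity.
# My own guess at the signal shape, NOT YouTube's algorithm.

def score(video, candidate, popularity_weight=0.001):
    tag_overlap = len(set(video["tags"]) & set(candidate["tags"]))
    title_overlap = len(set(video["title"].lower().split())
                        & set(candidate["title"].lower().split()))
    # Popularity term: meant as a tiebreaker, but if weighted too
    # heavily it can surface videos with no content match at all
    return tag_overlap + title_overlap + popularity_weight * candidate["views"]

source = {"title": "Haiti earthquake aftermath", "tags": ["haiti", "earthquake"]}
candidates = [
    {"title": "Haiti relief efforts", "tags": ["haiti", "aid"], "views": 5000},
    {"title": "Cute cat video", "tags": ["cats"], "views": 900000},
]

ranked = sorted(candidates, key=lambda c: score(source, c), reverse=True)
```

Notably, with this weighting the viral cat video outranks the topical one despite sharing nothing, which is exactly the kind of “nothing in common” mismatch I’ve been seeing; a popularity term run amok would explain it.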
“Odiogo’s media-shifting technology expands the reach of your content: It transforms news sites and blog posts into high fidelity, near human quality audio files ready to download and play anywhere, anytime, on any device.”—
Literally today I was decrying the paucity of icon offerings on Google Maps (also my own failure to learn advanced use of the API, but that’s another story).
Here are some of the icons used on a map of Obama’s travels during his first year in office. The icons indicate what type of media is inside the bubble.
I should point out that I’m having an issue with this map in FF 3.0 - not in any other browsers and only in the embedded version, not the full version. Opening a bubble with a video in it causes an unresponsive script error message. The video runs fine once the error message is closed, but it’s annoying and I can’t figure out why it’s happening. Nothing to do with the icons, obviously.
Extremely comprehensive look at a number of existing apps to evaluate what works and what doesn’t.
In line with the previous link about how to use objectives to drive UGC/social media activities, it seems that again it’s important to clearly define the objective of a mobile app. What do you want the audience to be able to do? (Obviously basing that on knowledge about what the audience actually does want to do). Knowing that will allow you to define the features that need to be included and to understand if you’re providing a clean, functional, useful user experience.
“A useful framework to draw on when thinking about how you approach UGC is the POST process for social media strategy outlined by Forrester Research (Bernoff, 2007). This involves identifying:
* People: who are your audience (or intended audience), and what social media (e.g. Facebook, blogs, Twitter, forums, etc.) do they use? Equally important, why do they use social media?
* Objectives: what do you want to achieve through using UGC
* Strategy: how are you going to achieve that? How will relationships with users change?
* Technology: only when you’ve explored the first three steps can you decide which technologies to use”—
Target countries in East Asia/Pacific and South Asia are also in the top 12. However, the percentages are sooo small compared to the US.
Does this mean Twitter’s not worth it? I don’t think so. For me, Twitter is all about reaching out to niche communities interested in a particular beat (to consume/interact with content) and part of a particular beat (to aid reporting). So in essence, all target audiences on Twitter must be small percentages of the total user base.
MediaShift has a great list of news organizations providing good resources for following the news from Haiti and/or helping survivors. Many were able to quickly deploy innovative and useful projects.
I saw some unique ways to visualize information, both in terms of using info/interactive graphics and in terms of making it easier to digest large amounts of information. There were also some new steps in tapping into social media, as well as some massive undertakings to help survivors. And it’s impressive to see what people have been able to do under the pressure of a breaking news crisis.
But the coverage also lays bare some of the challenges we’re all still struggling with, including how to make large amounts of resources digestible, how to integrate multimedia, external content and UGC with other coverage, and how to develop tools that are useful in addition to being interesting or informative.
Here are some of the projects I found particularly compelling, and some of the ways I think we can all do better next time (well, hopefully there won’t be a next time, but disasters and crises are a fact of life…).
Do People Do Better on Facebook/Twitter Than Organizations?
In this post from September, Government 2.0 Beta asks: is it better for an agency or an agency head to be on Facebook?
Apparently, they’ve got Facebook fan pages for the agency and for Administrator Lisa Jackson, and are finding that Jackson’s page is growing faster than the agency page (although the agency page has 2x the total fans).
What this post doesn’t point out though is that the content of each page is quite different. Jackson’s page has a more conversational tone, and gives more behind-the-scenes tidbits:
"Day 1 in Copenhagen: Busy 1st day - met with reps from Brazil and Latin America and had bilateral talks with China. And we’re hearing so many great thoughts and questions from the young people here. Exciting times!"
The agency page, by contrast, is entirely action oriented. Watch this, look at that, vote in this. So it’s not a fair contrast.
My hunch is that if both pages contained the exact same content, it wouldn’t matter at all whether it was a person or an organization. Facebook (and Twitter and [insert social network here]) users want to be spoken WITH, not AT. Beyond that, I don’t think they care whether the username is a person’s name or an organization’s name - at least, I don’t.
"First is fresh and new. It’s important to us that an article contain recent substantial information about a news topic, and it needs to be objective news to lead this cluster of stories. So press releases, satire, op-eds aren’t eligible to lead clusters.
Another factor is duplication and novelty detection. And that’s where we try to determine an original source of content from those that are duplicating the information. So something that we use there is this idea of citation rank. So per article, we can see that if a news story was broken by the Los Angeles Times, and then later another article, say in Washington, cited the Los Angeles Times as having been a source of their information, we can start to see the citation rank taking place for this story — that this article from the Los Angeles Times might have higher ranking now, because other people are citing it as being an original story.
Another factor is local and personal relevancy, and this applies to individual sections as well as editions of your publication. So what we want to do is actually give more weight to local sources that are likely more relevant to the news item. So if we take that idea of a man giving out free cars in North Carolina, it’s likely that we’d take a paper like the Charlotte Observer and know that that could be a higher authority for that story. And, therefore that article might be ranked higher in this cluster.
The last signal I want to cover in article ranking is the idea of trusted sources. For us, trusted sources doesn’t have to do with some arbitrary decision that we make, but it’s actually data driven. So, according to our data over time, did users start to look at your articles and then click on them? Let’s say that there were five articles being listed and a significant amount of users chose the third article and went to that source. Then we might start to determine that this source is actually very trusted for the certain type of information and over time, we start to build out what publications are trusted sources, but not for their entire publication. It is done on a section and category basis, so something like The Sporting News could be very trusted for sports information, but maybe not so much for business. And likely something like the Wall Street Journal might be very trusted in the United States for business information, but maybe not in India. So again these trusted sources have to do with section and edition, so it’s a very specific thing that we’re looking for due to aggregate user behavior. So, those are just four of the signals that we use in news search article ranking.”
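As I read it, the citation-rank and trusted-source signals both reduce to simple counting over aggregate data. Here’s a toy sketch of my interpretation (all names and numbers are made up, and this is in no way Google’s implementation):

```python
from collections import Counter

# Toy citation rank: count how often each article is cited as the
# source by later articles in the same story cluster (made-up data).
citations = [
    ("wapo-story", "latimes-story"),  # (citing article, cited article)
    ("nyt-story", "latimes-story"),
]
citation_rank = Counter(cited for _, cited in citations)
# "latimes-story" has 2 citations, so it ranks as the likely original

# Toy trusted-source score, kept per (publication, section) as described
# in the quote, rather than per publication overall (made-up data):
clicks = {
    ("sportingnews", "sports"): (500, 1000),   # (clicks, impressions)
    ("sportingnews", "business"): (20, 1000),
}
trust = {key: c / i for key, (c, i) in clicks.items()}
# High click-through in sports, low in business: trust is per-section
```

The interesting part is the keying: trust isn’t a property of the publication, but of the (publication, section) pair, which is what lets The Sporting News rank high for sports without dragging its business ranking up with it.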