Machine-assisted editorial curation: does it risk putting us in a “bubble of our own”?


So I watched an interesting video about some of the mechanically automated editorial work happening on major sites like Google and Facebook (see the video here). What's interesting about this talk isn't the idea that these sites are making decisions about what content you will and won't see; it's that they aren't even telling their users these decisions are being made.

I found it fascinating, and kind of scary, that a Facebook newsfeed would automatically prune out all of this guy's Republican content in favor of his Democratic content simply because he clicked on the Democratic links more frequently. I already worry that as a society we spend too much time seeking out opinions like our own and far too little time seeking out dissenting ones.

Now we have the technology helping us down this road.
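
To make the mechanism concrete, here's a deliberately naive Python sketch of what ranking a feed purely on past clicks looks like. This isn't any real platform's algorithm, and the data is made up; it just shows how quickly a click-driven cutoff erases an entire category of content without ever telling the user:

```python
from collections import Counter

def prune_feed(stories, click_history, keep=10):
    """Rank stories purely by how often the user clicked that topic
    before, then silently drop everything below the cutoff.

    stories:       list of (headline, topic) tuples
    click_history: Counter mapping topic -> past click count
    """
    ranked = sorted(stories, key=lambda s: click_history[s[1]], reverse=True)
    return ranked[:keep]

# A user who clicks "democratic" links more often gradually stops
# seeing "republican" ones at all -- no notice, no opt-out.
clicks = Counter({"democratic": 40, "republican": 3})
feed = [("Budget debate recap", "republican"),
        ("Primary results", "democratic"),
        ("Convention speech", "democratic")]
print(prune_feed(feed, clicks, keep=2))
# Both surviving stories are "democratic"; the "republican" one is gone.
```

The scary part is how little code this takes: a sort and a slice, and the bubble is already forming.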

As a company that provides technology often used for this sort of automated editorial work, I feel it's important to examine the effects of our work to ensure we're not doing more harm than good. Summarization and Dominant Ideas are exactly the sort of features the world we live in requires: there is simply too much information flowing by for anyone to read all of it, so using technology to reduce the stream to a manageable volume isn't just convenient, it's necessary.
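
For readers who haven't used features like these, here's a rough idea of what they do, sketched as toy frequency-based Python. To be clear, this is not our product's implementation or API; every function and example string here is hypothetical, and production systems are considerably more sophisticated:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it",
             "that", "for", "on", "we", "this", "are", "be", "as", "when"}

def dominant_ideas(text, top_n=5):
    """Crude frequency-based extraction: the most repeated content
    words stand in for the document's 'dominant ideas'."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

def summarize(text, max_sentences=2):
    """Keep the sentences containing the most dominant ideas;
    everything else in the stream is discarded unread."""
    ideas = set(dominant_ideas(text))
    sentences = re.split(r"(?<=[.!?])\s+", text)
    scored = sorted(sentences,
                    key=lambda s: sum(w in ideas for w in s.lower().split()),
                    reverse=True)
    return " ".join(scored[:max_sentences])

text = ("Filter bubbles narrow what we read. Filter bubbles form when "
        "software ranks content by our own clicks. Reading dissenting "
        "opinions breaks the bubble.")
print(dominant_ideas(text))              # e.g. ['filter', 'bubbles', ...]
print(summarize(text, max_sentences=1))  # the sentence densest in those ideas
```

Note that the same property that makes these features useful, throwing most of the stream away, is exactly what makes them worth scrutinizing.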

The trick is to make sure people understand the potential negatives so that they can make intelligent decisions about how to acquire and digest content.