Attribution Tool is a service that has come across my desk a dozen times in the last year, referred on to me by everyone from trusted colleagues to the director of my org to the developer himself (with whom, I should note, I have worked before and consider a friend). I had looked at it briefly before, but the last time someone sent me a prompt I thought it time to take a closer look and write it up.

The premise is simple enough – the service provides a bookmarklet that, when clicked, creates an overlay of whatever page you were looking at. This overlay allows you to then select content on that page, for which it generates 'embed code' to paste on your own site. Doing so will reproduce the content along with an annotated attribution link back to the original source.

There are a few other small twists to it – the attribution link uses a microformat that describes it as an 'attribution,' it looks like RDF data is being created that will associate the cited content with both source and destination, and, if you create an optional account, that account becomes a central storage spot for all of your snipped content.
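To make the mechanism concrete, here is a rough sketch in plain JavaScript of what generating that kind of embed code might look like: the selected HTML gets wrapped with an attribution link carrying a rel microformat and RDFa-style attributes. To be clear, every name here except rel="la:attributionCopied" (which a commenter quotes below from an actual embed) is invented for illustration; the real SNI.PS markup differs.

```javascript
// Hypothetical sketch of building 'embed code' for a snipped selection.
// Attribute and property names other than rel="la:attributionCopied"
// are invented for illustration, not taken from the actual service.
function buildEmbed(snippetHtml, source) {
  // RDFa-style attributes carry the attribution data inside the embed
  // itself, so the data travels with the snip rather than living only
  // on the attribution server.
  return [
    '<div class="snip" about="' + source.guidUrl + '">',
    '  <blockquote property="la:content">' + snippetHtml + '</blockquote>',
    '  <a rel="la:attributionCopied" href="' + source.pageUrl + '">',
    '    ' + source.title,
    '  </a>',
    '</div>'
  ].join('\n');
}

var embed = buildEmbed('Some quoted text.', {
  guidUrl: 'http://sni.ps/snips/abc123',       // server-issued GUID (hypothetical)
  pageUrl: 'http://example.com/original-post', // the page the snip came from
  title: 'Original post title'
});
console.log(embed);
```

The point of the shape, whatever the real attribute names are, is that a human-readable link and machine-readable metadata ride along in the same fragment.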

So the idea seems appealing. And to give credit to the developers, it is quite easy to use, and while there might have been ways to reduce the steps even further, really, it is reasonably slick. The fact that you can use it without an account is very cool. And it’s free.

But like so many things, a large part of whether it gets adopted comes down to whether the effort to use the tool (or any change to your existing workflow that the tool asks you to make) is worth the payoff: does it make it easier to accomplish something you were already doing, or easy enough to accomplish something you weren't already doing but might, if it were made easy enough?

The act of copying the content itself doesn't seem to be made particularly easier, so the value proposition seems to lie in providing an easier way to create attributions. Morally this seems to resonate – apart from what seem like a few fringe cases, there doesn't seem to be any real resistance in the open content/open education community to the idea that attribution is a reasonable requirement for reuse. So we seem to be saying we want to attribute original sources, and indeed the practice of the bloggers (and educators) I respect would also seem to support this. Indeed, Alan even coined a neologism for it:


But the word “Attribution” sounds vague.

So I tossed out a new word — Linktribution — attribution via a web link, or offering a “linktribute”.

So does SNI.PS make it any easier to do? Well, in my limited experience so far, not particularly. Neither the microformat nor the RDF is of any immediate benefit to me that I can see either (though I am not opposed to creating them if it's easy enough, which this is). Having a store of 'attributed' content – yes, I could see that having some value. Enough to make me change my workflow? Not sure.
I *want* to like SNI.PS, but I'm not sure. I'm going to keep trying to use it for a few more weeks, to see if it rubs off. The reason I am blogging it, though, is partly so that others can have a look and give me their sense of its usefulness, and their willingness to adapt their workflow to include something like this. What do you think? As a blogger, would you use this? As someone working on open content or open education, would you evangelize this to your users? – SWL


18 thoughts on “Attribution Tool”

  1. My first reaction was THAT’S FRACKING AWESOME! A way to embed clips of text from one site/blog/whatever into another? Very cool stuff.

    But, it looks like it relies on a third party server. If that goes down, all linktribution falls over. Am I wrong?

    For instance (let’s see if this embed code gets past the WP filters…)

    RiP: A remix manifesto (Trailer)

    Web activist and filmmaker Brett Gaylor explores copyright in the information age, mashing up the media landscape of the 21st

    It does provide a standard hyperlink (with a bonus rel="la:attributionCopied" attribute) but the identifiers resolve to the server using some generated GUID. Maybe that's a technical requirement – snipping sub-text items can't just rely on the URL as GUID – but relying on any single third party server as the foundation of an attribution system makes me break out in hives…
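D'Arcy's point can be made concrete with a sketch: because the embed carries an ordinary href alongside the server-issued GUID, a consumer can always recover the attribution target without the third-party server. The markup below is invented for illustration (only the rel value is quoted from an actual embed), and this is plain JavaScript, not anything the service ships.

```javascript
// Separate the durable attribution href from the server-dependent GUID
// reference in an embed fragment. The sample markup is hypothetical;
// only rel="la:attributionCopied" is quoted from a real embed.
function parseAttribution(embedHtml) {
  var hrefMatch = embedHtml.match(/rel="la:attributionCopied"[^>]*href="([^"]+)"/);
  var guidMatch = embedHtml.match(/about="(http:\/\/sni\.ps\/[^"]+)"/);
  return {
    sourceHref: hrefMatch ? hrefMatch[1] : null, // still useful if sni.ps is gone
    guidUrl: guidMatch ? guidMatch[1] : null     // only resolves while the server is up
  };
}

var sample =
  '<div about="http://sni.ps/snips/abc123">' +
  '<a rel="la:attributionCopied" href="http://example.com/post">source</a>' +
  '</div>';
console.log(parseAttribution(sample));
// → { sourceHref: 'http://example.com/post', guidUrl: 'http://sni.ps/snips/abc123' }
```

So the hyperlink half of the attribution degrades gracefully; it is only the GUID-based services layered on top that depend on the server staying up.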


  2. yeah, I agree – I don't know all the details behind the scenes, specifically around either the RDF or the GUIDs, but my initial reaction was to wonder if it couldn't have all been done client side. I'm hoping David or one of his folks will chime in; they are smart people and I am sure there are good reasons why certain decisions were made, but I agree this would give some pause.


  3. Hey Scott,

    Thanks for writing your post. You are bang-on that although the process of snipping is relatively easy, it's not necessarily easier than right-click copy-paste (except perhaps for QuickTime movies and Flash objects). The ingrained behavior of copy-paste is a hard one to override, so I am not certain that SNI.PS will resonate for regular bloggers.

    Hi D’Arcy,
    Actually all linkable attribution is enclosed in the embed. So if the server goes down the data will remain within the RDFa and the snipped object will display normally. However, the linkback via the GUID to the SNI.PS site would cease to work. Eventually we envision that to be an aggregation page displaying where the work in question is being used, as an attribution tree, and a place to aggregate comments from all sources with regard to that object.

    We built SNI.PS to be used in two other products we are developing. One is a mash-up tool that we will release in a few weeks. We wanted to make sure people were properly attributing content that they mashed up, and that the original creators could easily see and find out where their content is being used.

    It was not originally conceived as a "for-profit product/service" but rather as a piece of plumbing that we need to build other apps we envision emerging in a REMIX economy. To that end, we are really open to open sourcing the entire project, if we could find some advocates and interest in it.

    I’d love to hear any further suggestions or criticisms.

    Thanks for taking the time to give your thoughts on SNI.PS!
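David's point above, that the attribution data remains within the RDFa even if the SNI.PS server disappears, can be sketched from the consumer side: any client can pull the pairs back out of the embed with no network access at all. The property names and markup here are hypothetical, and this is an illustrative toy, not the service's actual code.

```javascript
// Pull RDFa-style property/content pairs out of an embed fragment with
// no network access, illustrating why the attribution data survives a
// server outage. Property names here are hypothetical.
function extractRdfaPairs(embedHtml) {
  var pairs = {};
  var re = /property="([^"]+)"\s+content="([^"]+)"/g;
  var m;
  while ((m = re.exec(embedHtml)) !== null) {
    pairs[m[1]] = m[2]; // e.g. 'la:sourceUrl' -> the original page
  }
  return pairs;
}

var snip =
  '<div>' +
  '<span property="la:sourceTitle" content="Original post"></span>' +
  '<span property="la:sourceUrl" content="http://example.com/post"></span>' +
  'Quoted text.' +
  '</div>';
console.log(extractRdfaPairs(snip));
// → { 'la:sourceTitle': 'Original post', 'la:sourceUrl': 'http://example.com/post' }
```

A real consumer would use a proper RDFa parser rather than regexes, but the principle is the same: the metadata lives in the copied markup itself.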


  4. Hi. I'm one of the lead developers on SNI.PS and I'm thrilled you're checking out the service.

    Usability on this was a huge challenge and we put in what shortcuts we could to help with that. Try using the embed tag straight from the capture dialogue (it comes up when you create a Snip) rather than going back to your library. Or perhaps you're already doing that. Feedback and suggestions are welcomed and encouraged. 🙂

    In regards to relying on our servers, D’Arcy has about 90% of it in his second comment. We need to be able to identify specific snips in order to link them together. We did some work to help mitigate the reliance on our servers. The attribution chain is specified, not just the last link, and those point to URLs in the wild, not just GUIDs.

    The other reason is that RDFa plays very nicely with URIs as identifiers so if a Snip is placed in various places it’s always identified with the same URI and RDF services can identify it as the same subject.


    Rob Linton


  5. David, thanks for taking the time to clarify some of these points. I had wondered about the video question – I tried it on a few pages and couldn't seem to get it to work. Would this have been because those pages themselves were embedding the videos, say from YouTube? (That was the case on the pages I tried.)

    Can't wait to see the mashup tool. And hopefully the shout out about open sourcing this may net you some takers, because for any of the small criticisms in this write-up, it is a pretty slick implementation that might help spread the practice of attribution further. In theory an open source version could work on an 'archiving' feature as well, so that when something was snipped a copy got made too, and in the event that the original disappeared it could still be found. Furl does this with bookmarks, and it's a value-add that might convince people this was a better way to cite.


  6. Keep your eyes peeled to SNI.PS for the mashup tool. I'll let you know when it is live – next week, hopefully.

    Can you send me the pages that were not working? We have a special handler that should work with any YouTube video no matter where it is.

    Scraping code to determine what is an object and what is not is a pretty inexact process, especially with the wide variety of HTML/CSS structures out on the Web. Firefox and IE plug-ins would eliminate this trouble, but getting people to install those isn't easy. So the only way to improve the JavaScript bookmarklet is to iterate on it when we find cases where it breaks.
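The "special handler" presumably has to recognize YouTube embeds however the host page happens to include them, which gives a flavor of why this scraping is inexact. Here is a toy heuristic in that spirit; it is not the actual SNI.PS handler, just an illustration of matching video IDs across a few common embedding styles.

```javascript
// Toy heuristic, not the actual SNI.PS handler: find YouTube video IDs
// in arbitrary page HTML, whether embedded via <embed>, <object>,
// <iframe>, or a plain watch link.
function findYouTubeIds(pageHtml) {
  var re = /youtube\.com\/(?:v\/|embed\/|watch\?v=)([\w-]{6,})/g;
  var ids = [];
  var m;
  while ((m = re.exec(pageHtml)) !== null) {
    if (ids.indexOf(m[1]) === -1) ids.push(m[1]); // de-duplicate repeats
  }
  return ids;
}

var page =
  '<embed src="http://www.youtube.com/v/dQw4w9WgXcQ"></embed>' +
  '<iframe src="http://www.youtube.com/embed/dQw4w9WgXcQ"></iframe>';
console.log(findYouTubeIds(page)); // → [ 'dQw4w9WgXcQ' ]
```

Even this tiny sketch already needs three URL patterns, which is exactly the iterate-when-it-breaks problem described above.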


  7. David and Rob – please don't take my comments as negativity. The functionality of SNI.PS is absolutely essential – being able to embed and provide attribution for small bits of text etc… Great stuff! And I love that the embedded/pasted code will degrade gracefully if the server is ever unavailable (say, 20 years from now… 😉) – something that would have been difficult or impossible if you'd gone the embedded JSON route or something else.

    Thanks for putting SNI.PS together. I'm looking forward to playing with it more.


  8. […] Attribution Tool at EdTechPost The premise is simple enough – the service provides a bookmarklet that, when clicked, creates an overlay of whatever page you were looking at. This overlay allows you to then select content on that page, for which it generates ‘embed code’ to paste on your own site. Doing so will reproduce the content along with an annotated attribution link back to the original source. […]


  9. Great tool – I look forward to discovering all the ways that it will make my life easier!

    You're right about workflow adaptation – if we don't make the effort to change, then we might not discover a better way of doing things. On the other hand, effort implies work, and with so much of it already (work, that is), why add more to our plates?

    For me, new technology is fun, but for many teachers it's simply daunting. I wonder what sort of uptake this tool will have in the ed tech community…


  10. This does sound very cool, and with RDFa there should be ways for others to use the data to demonstrate the value via other services that consume the RDFa. It would fit very nicely into Tony Hirst’s thinking about graphing conversations.

    But – hate to say it – I just tried to grab the RDFa with ARC's extractor, and it didn't seem to pick it up (it picked up lots of other data, though). Can anyone recommend another extractor to try, or point me to other pages that have snipped things, to see if ARC does better there?



  11. Neil, you’ll have to excuse me, I’m still trying to calm down, ‘cos here I was thinking NEAL FREAKIN’ STEPHENSON, my favourite author pretty much on the planet, just commented on my blog. I’m sure you get that a lot 😉

    I don’t know of anyone in K-12 using it, but the developers are following this thread and so maybe they do.

