Let the user download the list of his saved stories

If I move to NewsBlur, I have to make sure that my subscription list and saved stories are not locked into NewsBlur. I might have to, or want to, move to another RSS reader someday.
I can download my subscribed sites as OPML, but there is no way to download my saved stories.
This is important for backup. If I have decided to save a story out of hundreds of others, then it must be important to me, and I would want to keep a backup copy of it.

Also, as per your reply at the following link, it seems that Google Reader Sync is out of the question. This only increases the importance of backup.
https://getsatisfaction.com/newsblur/...

talha131
Posted 6 years ago

Samuel Clay, Official Rep
This is a great idea. Just so you know, you can already download your saved stories: http://www.newsblur.com/api. It's not particularly easy unless you know how to code, but it'll get the job done.

Although, what format would you want? The API gives you JSON. Do you want XML?
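For anyone who is comfortable running a small script, a minimal sketch of pulling one page of saved stories from the API might look like the following. The /api/login and /reader/starred_stories endpoints are assumptions based on the rest of this thread; check the API docs for the exact paths and parameters.

# Minimal sketch: log in and fetch one page of saved (starred) stories as JSON.
# Endpoint paths and credentials below are placeholders to adapt.
import json
import requests

session = requests.Session()
session.post("https://www.newsblur.com/api/login",
             data={"username": "YOUR_USERNAME", "password": "YOUR_PASSWORD"})

resp = session.get("https://www.newsblur.com/reader/starred_stories",
                   params={"page": 1})
resp.raise_for_status()

with open("saved_stories_page1.json", "w", encoding="utf-8") as f:
    json.dump(resp.json(), f, indent=2)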

talha131
Thanks for paying attention to the suggestion.

Just so you know, you can already download your saved stories: http://www.newsblur.com/api.

Yeah, I saw it and would use the API if the need arises. But you see, not every user can make head or tail of an API, much less use one. A simple 'Backup your saved stories' option in Preferences would be much better.


what format would you want? The API gives you JSON. Do you want XML?

Google Reader lets us download the list in JSON, but using that file is neither obvious nor easy, at least for a common user.

I would rather go with the XML format. There are plenty of tools that convert XML to HTML, and that HTML file can be used to import bookmarks into a browser. XML's structure is also easier to read and understand than JSON.

A power user will always find a workaround to make any format work. But for a common user, XML makes more sense.

kephale
I miss this feature a lot

geoffwood
Did anything ever happen with this? I'd really like to have this option, at least so I have a backup I could import into another reader.

Samuel Clay, Official Rep
The API is still the preferred method, as it's a text export of your saved stories.

geoffwood
Appreciate the speedy response.

As a non-programmer, I find the API essentially inaccessible. When I switched from Google Reader I was able to import a JSON file. Other services still support importing JSON, so it would be useful to be able to download a JSON file and upload it to another service.

Ben
Still preferred by whom? Please don't speak for me. Since this has been pretty much ignored for three years, I take it you aren't going to come out and say NO to what (for you) would probably be a simple request. Can anyone in the user support community help out?

Andrew
Why don't you use one of these "If This Then That" (IFTTT) recipes:

https://ifttt.com/recipes/search?q=ne...

I just installed a Pocket integration this morning (an existing recipe) that saves every NewsBlur story I save (along with its tags) to Pocket. Beer in a bottle, brilliant.

There are a bunch of integrations there for Google Drive and Evernote; it's completely free and you don't need to be able to code to set it up!

Ben
Thanks Andrew! It works great, but it only gave me the last 10 stories.

geoffwood
Same here. The URL http://newsblur.com/reader/starred_st... only pulls the 10 most recent saves, but if we can get around that we'd be well on our way.

Andrew
http://newsblur.com/reader/starred_st...
http://newsblur.com/reader/starred_st...
Copy, paste; copy, paste; etc., etc.
A bit tedious, but it works!

Andrew
Just continue incrementing the page number until you run out :)

Ben
OMG, with over 1,000 stories... thanks, but I sure would like to see an &n=1000 option there.

Samuel Clay, Official Rep
I'm not sure what you would do with your saved stories in any other format. They have to come in some format, and the API provides a clean, accessible one for other services to integrate with.

Ben
I am sure it does. Did you miss the part about the API not being accessible to non-programmers? Available <> (or !=) accessible. The issue is not the format. (Edit: you even acknowledged this difficulty in your first response in this thread, over two years ago. I don't think you understand what we are asking for here, Samuel.)

geoffwood
What I'd like is to be able to save them in another reader as a backup. Feedbin can import a JSON file of starred items; I'd just like to get them out of NewsBlur in the same format I brought them in from Google Reader: as a JSON file.

Ben
OK, I can understand why there is zero motivation for Samuel to implement this, as it would "allow" users to move their own data to another service. Please, if this is something you will not work on, just say so. My motivation is not to take my data to another service but to parse out articles for the different threads of my research. Sheesh. Since I am not expecting a reply, can someone provide some actual sample API code?

N/A
Using the API to get your starred stories does not require any programming ability, nor any code at all. Simply visit http://www.newsblur.com/reader/starre... in any browser and save that file.

This is definitely not meant to discourage people from taking their data elsewhere. NewsBlur puts a "Download OPML" button right on everyone's preferences page specifically to let you pack up your data and leave any time you want. (That doesn't include starred stories since there's no standard way of listing them for RSS readers, but that's enough to get up and running on any other mainstream product.)

Rather, the reason programming skills are even in the discussion is that once you've downloaded your starred stories you'll presumably want to do something with them, which likely will involve some programming. If you're looking at a service that already knows how to read the JSON file and do interesting stuff with it, then downloading it from the API at the URL above is really all you ever need do.

Ben
Well, same issue as above: this limits you to just the 10 most recent starred posts. I want to be clear about what is being asked. I don't want to beat a dead horse here, but: is there a way to download ALL of your SAVED stories to JSON/OPML/CSV/whatever (see the very first post)? I can manipulate any of those resulting formats to my needs.

It is known that you can download the OPML of your subscribed feeds.

Ben
Bump? After seeing requests a whole day old get new features in the app, here's hoping a feature request that is three years old gets integrated.

Ben
OK, since the owner has absolutely no desire to implement this feature, I am willing to PAY anyone to build it and submit it through whatever process the sole owner would approve. Takers?

Samuel Clay, Official Rep
For whoever has the time to take this on, it wouldn't take very long. Just auth the user and grab their /reader/starred_stories until it returns 0 stories. Possibly provide it in RSS format, but something else might work as well.
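As a rough, unofficial sketch of that loop in Python: it assumes the /api/login endpoint for authentication and a top-level "stories" array in the JSON response, so check the API docs for the exact details.

# Rough sketch of the loop described above: authenticate, then request
# /reader/starred_stories page by page until a page comes back empty.
# The "stories" response key and /api/login endpoint are assumptions.
import json
import requests

BASE = "https://www.newsblur.com"
session = requests.Session()
session.post(BASE + "/api/login",
             data={"username": "YOUR_USERNAME", "password": "YOUR_PASSWORD"})

all_stories = []
page = 1
while True:
    resp = session.get(BASE + "/reader/starred_stories", params={"page": page})
    resp.raise_for_status()
    stories = resp.json().get("stories", [])
    if not stories:
        break
    all_stories.extend(stories)
    page += 1

with open("saved_stories.json", "w", encoding="utf-8") as f:
    json.dump({"stories": all_stories}, f, indent=2)

print("Saved %d stories" % len(all_stories))

The resulting saved_stories.json would be a single file with one "stories" list that other scripts or services could consume.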

Ben
"it wouldn't take very long."

I am pretty insulted you provided this reply, Samuel. So should the other folks be, after requesting this feature for going on three years.

Samuel Clay, Official Rep
Ben, I'd rather you didn't use that tone, but I understand where you're coming from. I'll try to hook up some sort of downloader tomorrow. It'll be in a format you can't use anywhere else, but at least it'll be a convenient way to store your saved stories.

Ben
Well, it's easy to be frustrated when you earlier stated "it's a great idea" and "it wouldn't take very long".

Regarding "a format you can't use anywhere else" I don't believe you recall my earlier comment "My motivation is not to take to another service but to parse out articles for different threads of my research." Nobody else in the thread is asking for an unusable backup - I assume you are doing backups of all of our unusable data.

Earlier you did state, "what format would you want? The API gives you JSON. Do you want XML?", and yesterday you mentioned "RSS format", yet today it's "a format you can't use anywhere else". I can't seem to reconcile all these conflicting statements you make here.

My vote is for any of them, or all of them: anything that lets me do something with the stories for my research and for sharing with students and other faculty.

Samuel Clay, Official Rep
So I'm not able to get to this today, but it's on the list.

Ben
I hope it's still on the list.

Ben
Bump again this week.

Ben
Bump again this month.

John Morahan

Ben
Is there a way to integrate that into the NewsBlur website?

Getting errors. Trying to troubleshoot. Here's the code I used (thanks to adept) to get it to "work":
http://pastebin.com/gN1P9UQa

So, this grabs each page (same issue as listed above, with just 10 stories per page), headers and all, and puts each one into a JSON file. I now have over 100 JSON files and can't concatenate them because of all the headers. Is there a way to put them all into one file with a single header? Maybe someone who knows jshon a bit better can help?

Or maybe Samuel can just do this; it's his code...
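In the meantime, if each per-page file is a JSON object with a top-level "stories" array (an assumption based on the API responses discussed above), a short script along these lines could merge them into one file with a single header:

# Sketch: merge per-page JSON exports (e.g. page1.json, page2.json, ...) into
# one file with a single top-level object. Assumes each file has a "stories" array.
import glob
import json

merged = []
for path in glob.glob("page*.json"):  # hypothetical file names; adjust the pattern
    with open(path, encoding="utf-8") as f:
        merged.extend(json.load(f).get("stories", []))

with open("saved_stories_merged.json", "w", encoding="utf-8") as f:
    json.dump({"stories": merged}, f, indent=2)

print("Merged %d stories" % len(merged))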

John Morahan
So I started trying to improve it and ended up rewriting it in PHP...

https://github.com/jmorahan/newsblur-export/releases