Can't view rss feed hosted on local computer (localhost=127.0.0.1)

  • Problem
  • Updated 4 years ago
I'm trying to view a feed running on a server on my laptop.
I get the error "This address does not point to an RSS feed or a website with an RSS feed."
Is the feed checked from the NewsBlur server?
Other viewers have no problem seeing it.

John

  • 7 Posts
  • 0 Reply Likes

Posted 4 years ago


Jürgen Geuter

  • 3 Posts
  • 3 Reply Likes
127.0.0.1 always points to the local computer. NewsBlur would look for the address 127.0.0.1/rss on its own machine.

The problem is that NewsBlur is server software that's not running locally on your machine (as your other RSS viewers seem to be). You'd have to use your external IP (the one your ISP gives you automatically) in the feed address for NewsBlur to be able to pick it up.
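A minimal Python sketch (not NewsBlur's actual code) of why this fails: a loopback address always resolves to whichever machine does the lookup, so NewsBlur's servers would end up asking themselves for the feed.

```python
import socket
from urllib.parse import urlparse

def is_loopback_feed(feed_url):
    """True if the host in feed_url resolves to the machine doing the lookup."""
    host = urlparse(feed_url).hostname
    ip = socket.gethostbyname(host)
    # 127.0.0.0/8 is the loopback range; it always means "this machine",
    # so a remote fetcher given this address would fetch from itself.
    return ip.startswith("127.")

print(is_loopback_feed("http://127.0.0.1/rss"))  # True
print(is_loopback_feed("http://8.8.8.8/rss"))    # False
```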

John

  • 7 Posts
  • 0 Reply Likes
Hi, Jürgen.
Thanks for your help.
I'm aware that NewsBlur is server software.
However, it's also possible that the feeds themselves are fetched from the local computer.
It looks more reasonable to me that the server stores the URLs, but the requests are done on end computers in a distributed way.
I admit I feel too lazy to start Fiddler/Charles and check myself; I'd prefer an "official" answer.
Are you sure that all feeds are fetched on the server?
Thanks
(Edited)

Darren Kay

  • 68 Posts
  • 19 Reply Likes
As explained, localhost is on your machine. You have to allow external access to your feed so the NewsBlur servers can collect the RSS data. Either that, or run your own copy of NewsBlur on your computer.

Jürgen Geuter

  • 3 Posts
  • 3 Reply Likes
Yes. NewsBlur doesn't use your personal computer to download feeds; that's what the server does.

John

  • 7 Posts
  • 0 Reply Likes
Hi, Jurgen.
Thanks for your help.
Do you know what the motivation behind this behavior is?
Why not keep only the URLs/metadata/UI on the server, and let end computers download and parse the feeds?

Darren Kay

  • 68 Posts
  • 19 Reply Likes
Why would you create a server solution just to make the client do all the work? If all the collecting, sorting, and training were done in the web browser, it would be slow, and there would be no point in having a server to begin with. Why not just store the URLs in your reader app?

John

  • 7 Posts
  • 0 Reply Likes
Same reason I use Gmail: so I can access it on any computer (laptop, living room, work, random) without installing anything, just by opening a browser.

I guess there is some motivation for having the feeds fetched on the server rather than on clients;
I'm just trying to figure out what it is.

Darren Kay

  • 68 Posts
  • 19 Reply Likes
You just answered your own question: it's server-based so you can access it anywhere. What if you had to download your Gmail messages on every device you wanted to use? Say you read a message on your PC; it would still be unread on your phone. Centralise the service.

John

  • 7 Posts
  • 0 Reply Likes
> "What if you had to download your gmail messages on every device you wanted to use it on"
That's actually what happens: if you want to read a message on device "X", it is "downloaded" to device "X".

But I think the example is not quite right, because feeds are not private and sent only to you.
The server could keep only the main entry URL of the feed,
plus hashes of the URLs of the stories that you have read,
and send this information so the client can download the actual feeds.
(Edited)
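The hash idea John describes can be sketched in a few lines; this is a hypothetical illustration of his proposal, not how NewsBlur works.

```python
import hashlib

def story_hash(url):
    # The server would store only this digest, never the story content.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

# Hashes the server remembers for stories the user has already read:
read_hashes = {story_hash("http://cnn.com/story-1")}

def is_unread(url):
    return story_hash(url) not in read_hashes

print(is_unread("http://cnn.com/story-2"))  # True: not seen yet
print(is_unread("http://cnn.com/story-1"))  # False: already read
```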

Darren Kay

  • 68 Posts
  • 19 Reply Likes
"downloaded" .... "LOL"

Rob Maxwell

  • 9 Posts
  • 1 Reply Like
Gmail interacts with mail servers, and you interact with it using whatever protocols you use (IMAP? HTTPS?). If you ran your own mail server on 127.0.0.1 and wanted Gmail to retrieve your email from your 127.0.0.1 server, you'd also need to give it an external IP.

Samuel Clay, Official Rep

  • 6514 Posts
  • 1474 Reply Likes
As everybody else mentioned, this isn't possible. But I did at one point consider using a decentralized model for feed fetching, using the user's machines to fetch feeds. Unfortunately, it's a nonstarter because everybody's machine is built a little bit differently, and the amount of fetching that has to happen is astronomical. I pay $$$ in monthly server hosting bills almost exclusively for the ability to fetch and parse around 100 feeds every single second.

tedder42

  • 149 Posts
  • 11 Reply Likes
THE FEEDS ARE COMING FROM INSIDE THE BUILDING

heh

kevinrunyon

  • 1 Post
  • 1 Reply Like
When you figure out how to install your RSS feed on Samuel Clay's server, then you can point to 127.0.0.1 all you want. I'm sure for enough money he'd do it...money can solve any problem.

Samuel Clay, Official Rep

  • 6514 Posts
  • 1474 Reply Likes
If you pay for a static IP, your ISP can hook you up and then you can run your web server off your own box. Back when I remember paying for that in the 2000s it was $5/month. Otherwise, DynDNS is an option.

Darren Kay

  • 68 Posts
  • 19 Reply Likes
But that would still require him to leave the laptop on all the time for polling? If he's the only subscriber to the feed, the polling would be so infrequent that the data might never actually be collected.

Samuel Clay, Official Rep

  • 6514 Posts
  • 1474 Reply Likes
Or consider paying $5/month to Digital Ocean and then you get a box in the sky.

John

  • 7 Posts
  • 0 Reply Likes
@Samuel.

It was quite clear to me from the beginning that 127.0.0.1 can be accessed only from my PC and not from the server; that looks too obvious to discuss.

I'm still missing the objective of using the server for fetching; maybe I need more coffee.
Let's say I'm subscribed to cnn.com/rss.
When I log in to NewsBlur, it sends the client machine the URL cnn.com/rss.
The client fetches the feed and displays it.
For each item, the client reports the item's URL/hash to the server.
The server stores this list, and on the next login sends it along with the URL cnn.com/rss,
so as to prevent showing already read/deleted items, etc.
The list can be compacted based on dates/time spans, etc.
So the server keeps all the information/UI/etc., but the fetching is done on clients.

OK, so what in the scheme above is not going to work?
(Assuming I don't want to see items from two years ago which are no longer in the feed
but are still kept on the server; I don't even know if you support showing them.)
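The steps John lists can be sketched end to end. Everything here (the `SyncServer` class, the fake fetcher) is hypothetical, illustrating his proposed protocol rather than NewsBlur's real design.

```python
import hashlib

def item_hash(url):
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

class SyncServer:
    """Stores only the feed URLs plus hashes of already-read item URLs."""
    def __init__(self, feed_urls):
        self.feed_urls = list(feed_urls)
        self.read = set()

    def login(self):
        # On login the server hands back the feed URLs and the read-hash list.
        return self.feed_urls, set(self.read)

    def mark_read(self, h):
        self.read.add(h)

def client_session(server, fetch):
    """One login: fetch feeds on the client, show only unread items."""
    feeds, read = server.login()
    shown = []
    for feed_url in feeds:
        for item_url in fetch(feed_url):   # the HTTP fetch happens client-side
            h = item_hash(item_url)
            if h not in read:
                shown.append(item_url)     # display the unread item
                server.mark_read(h)        # report the hash back to the server
    return shown

# Fake fetcher standing in for a real HTTP request:
items = lambda url: ["http://cnn.com/a", "http://cnn.com/b"]
server = SyncServer(["http://cnn.com/rss"])
print(client_session(server, items))  # both items shown on the first login
print(client_session(server, items))  # [] -- nothing new on the second login
```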

BTW, the reason I was trying the local RSS is to turn a forum which doesn't supply RSS into an RSS feed; I think you do that for YouTube, if I'm not mistaken.

Thanks for your help.
(Edited)

brett.wagner

  • 1 Post
  • 1 Reply Like
That is not how it works. You say to NewsBlur, "I'm interested in cnn.com/rss." NewsBlur says, "Great," and puts it in a list of feeds to fetch on a schedule. I imagine there might be some ranking on the backend: the more people request a feed, the more often it gets checked for updates.

When you log into NewsBlur and click CNN, it shows you the latest fetch that the NewsBlur server performed.
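That ranking could look something like this. The tiers and intervals below are made up for illustration; brett is only guessing that such a scheme exists, and this is not NewsBlur's real schedule.

```python
def poll_interval_minutes(subscriber_count):
    """More subscribers -> the feed is checked more often (made-up tiers)."""
    if subscriber_count >= 100:
        return 5       # popular feeds: every few minutes
    if subscriber_count >= 10:
        return 60      # mid-tier feeds: hourly
    return 24 * 60     # single-subscriber feeds: daily

print(poll_interval_minutes(250))  # 5
print(poll_interval_minutes(1))    # 1440
```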

James DiGioia

  • 12 Posts
  • 0 Reply Likes
Simply put, fetching on the client side would be a heavy operation. You're not merely grabbing the data; you're parsing it and transforming it into something the reader can use for display. If you have 100+ feeds, making 100+ HTTP requests and then parsing all of that in the browser, given the heavy interactivity of the RSS reader already, would be a really poor user experience. On weaker machines the page could simply hang or die, and the changes required to the JS to prevent that would mean the processing takes a really long time.

It's not feasible, so the server does all of that; there you can throw as much processing power at it as you need, and the reader is responsible for simply (or not-so-simply) rendering the UI.

Plus, as Brett mentioned above, if a single server (or set of servers) has information about all of the feeds together, you can optimize things there in ways that you couldn't in the browser; e.g. if a dozen subscribers need cnn.com/rss, it only needs to be fetched once and can be used by all of them, in addition to optimizations around how often to fetch and what/how much data to cache.

It just makes a lot more sense to put that in a server instead of the browser.

A desktop application has access to the computer hardware in a way that a web page doesn't. You just can't do that kind of heavy lifting in the browser (yet?).

John

  • 7 Posts
  • 0 Reply Likes
@James, I agree more with your "yet?" than with "It's not feasible".

From what you say, it's feasible, but the UI might not be as good on slow machines.

If the server provides functionality which is otherwise not possible, e.g. searching the content of feeds that you are _not_ subscribed to (or some similar background usage), then I agree that a server is a must.

But so far I'm not convinced. Maybe there is something like that; I'm not familiar with the server and how it's implemented.

But as an end user who has "64 feeds", I really don't care if the server saves 200 requests from other users to cnn.com/rss by fetching it only once,
and until there are about 100,000 users on the server, cnn.com will not care either.

From what you say, I understand that the end user receives the same product
(slowness is arguable).
But for reasons unknown (yet), the manufacturer of the product willingly pays $$$ a month to manufacture the same product.

This leads me to the conclusion that something is missing, because it doesn't make sense to me (yet).

William Morrell

  • 58 Posts
  • 15 Reply Likes
These are different models of RSS fetching. The NewsBlur model provides some features that range from difficult and impractical to impossible to accomplish with a strictly client-side model.

If you have a client RSS reader, like say NetNewsWire, it works exactly as you describe. Each install will go to cnn.com/rss, download the feed, parse out the stories, and show them to you. Some (like NetNewsWire) include a sync capability, so the app on your phone knows when you've already read a story on your laptop, etc.

What NewsBlur adds are things like:
- more frequent feed fetching: NewsBlur can check the feed every few minutes, then push those updates out to all NewsBlur users. In your case, you could do the same for your 64 feeds and make 64 requests every few minutes from a client. Or you can make one request every few minutes to NewsBlur, and it will tell you what updated. This becomes more important for big power users; I have a mere 117 feeds, but I know some who follow thousands.
- statistics on feeds: it's not possible for a client RSS reader to know how many people are subscribed to a feed just from the feed address. NewsBlur can track the number of subscribers; how many people thumbs-up or thumbs-down a particular author, tag, or phrase in a feed; when a feed is available; when it changes; what story changes are made between fetches; and likely more that I cannot recall right now.
- native social features: most clients can take advantage of OS features to share data via email, text, tweet, Facebook, etc. NewsBlur includes blurblogs, which are a bit like a mini social network just for NewsBlur users.
- feed history: if you subscribe to cnn.com/rss from a client RSS reader, you will only get the last X stories in the feed (usually something like 10, 20, or 50). As long as someone else subscribed to the feed before you, NewsBlur will know all the stories that appeared in the feed going back some number of days. I think it used to be 30, and is now 90 days?
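The feed-history point can be sketched as a retention window: the live feed only carries its last few stories, but the server keeps everything it fetched within the window. The 90-day figure is William's uncertain recollection, used here purely as an assumption.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # assumed window; the thread isn't sure

def stories_in_history(stories, now):
    """Keep every fetched story first seen within the retention window."""
    return [title for title, first_seen in stories
            if now - first_seen <= RETENTION]

now = datetime(2016, 6, 1)
stories = [("old-story", datetime(2015, 1, 1)),
           ("recent-story", datetime(2016, 5, 1))]
print(stories_in_history(stories, now))  # ['recent-story']
```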

James DiGioia

  • 12 Posts
  • 0 Reply Likes
@John The slowness isn't "arguable"; it's the exact reason it's not done that way. Samuel himself said above that he considered doing it that way, but it was a "nonstarter." It's not that it's impossible, per se, but you can't get as much power out of doing it in the browser, and thus can't include many of the features NewsBlur currently has.