BigQuery public datasets would be a better hosting platform for this kind of data. I worry they are not anticipating the security & budgeting issues of hosting a real-time API.
With Public Datasets, the account making queries pays for the queries. NPS only pays for the storage (which is minimal).
With this API, NPS has to pay for every call to the API. That’s not cheap.
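To make the distinction concrete, here's a minimal sketch of the querier-pays model (the project, dataset, and table names below are hypothetical placeholders):

```python
# Sketch of querier-pays with a BigQuery public dataset.
# Project/dataset/table names are hypothetical placeholders.
from google.cloud import bigquery

# The client is created against *my* billing project, so *my* account
# pays for the bytes scanned; the dataset owner pays only for storage.
client = bigquery.Client(project="my-billing-project")

query = """
    SELECT name, state
    FROM `some-public-project.nps_data.parks`
    WHERE state = 'CA'
"""
for row in client.query(query).result():
    print(row.name, row.state)
```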
Requiring use of a private party to access public data is usually something we discourage.
Private parties that the customer needs to pay to access this NPS public data:
* AWS
* Comcast for their internet service
* Apple for their laptop
* A number of software providers for their development tools.
But asking the customer to pay google to query the data is crossing the line?
Why would someone who just wants to access the data need to pay for AWS? The rest can be avoided by using a library PC and open source software. Or, more likely, they're things almost everyone already has on hand anyway.
Every request to EC2 costs money. TANSTAAFL
What?
There Ain't No Such Thing As A Free Lunch. It's an old saying meaning nothing is really free. If you aren't paying money, you're paying some other way.
Who exactly am I paying for the software on my laptop?
Taxes are, generally, money.
The majority of your bullet points would be circumvented by running your own server and developing on a Linux OS.
Where do you host this hypothetical server? How does it get internet access?
FYI, you can edit your posts for an hour. Instead of reposting, just add your new thoughts onto your previous comment?
Who makes servers?
What's stopping me from accessing this without an AWS account, over Frontier, on a Thinkpad?
That's the difference.
You’re thinking small. I’m thinking big. That’s the difference.
You're arguing in favor of making consumers require an account at a company that's already centralized too much of the web.
That's fundamentally a lot worse than the government paying hosting costs to one particular vendor for a commodity service.
You really aren’t. You’re talking like someone who is willfully ignorant of the decades of internet history that have preceded this conversation.
People have quite literally died over the issue of public access to public data. It’s quite an important belay point to arrest the deterioration of the spirit of open networks.
Well, no, a customer has choices for most of those, because the government isn't hosting the data exclusively with a private vendor that charges the customer for access, providing an exclusive franchise to that vendor.
That was what was suggested upthread.
Requiring the user to have certain capacities to access data, where those capacities are provided by a number of competing vendors (and some by free, gratis and/or libre sources) is a very different thing.
NPS is hosting this data on AWS, a private vendor. And NPS (ultimately taxpayers) pays for every query.
So are you OK with some Chinese app company making 50 crappy NPS-themed apps and having taxpayers pay for the backend?
Addressed in a comment in another subthread, which I know you are aware of since you responded to it, too: https://news.ycombinator.com/item?id=39086270
I will make that trade every day of the week if it means access continues to be through a standard protocol (HTTP) and not beholden to any particular vendor.
My second-hand laptop isn't Apple; my host is a Raspberry Pi, not AWS. I don't use Comcast; I have a wide choice of providers, including free ones (at my local library). And I've never paid for a development tool.
The federal government pays Comcast to provide internet to low income households. And you can actually access this data on any old brand of laptop, or the desktop computers provided for use in most jurisdictions, and do not have to pay for any development tools to do so.
This argument is nonsensical.
Site hosting is not a customer cost.
The rest you list are costs orthogonal to this service.
Yes.
Why are you arguing for a US government agency to require its citizens to pay for access to data which they have already paid for by funding said agency?
Tell that to US government agencies publishing their announcement and other news on Medium.com.
Do you need anything other than a web browser to access it? Medium is just the server that it’s hosted on.
You have to accept the TOS and in some cases pay for a subscription.
Paying for subscription is only when the publisher has opted into monetization. Which isn’t the case for US government agencies.
That said, I hate Medium with a passion and that things like the Netflix tech blog are hosted there.
They're hosting on AWS. So either the taxpayer pays for hosting, or the customer pays.
Parent thread said:
Keyword is access. Hosting on AWS is an implementation detail that doesn't block the end consumer from accessing the data.
There are 4-5 other assets the customer needs in order to access it. So one more wouldn’t be a big deal.
At what cost?
I agree with this in theory, but in practice it would be unrealistic, and honestly a misuse of government funds, for them to reinvent and maintain a fully in-house stack for all of their digital services.
Are you suggesting the government put their public access API behind a paywall?
I'm saying they need to be more careful about taking on the hosting costs for this data. They should choose a solution that makes the customers pay for accessing the data.
You’re going to see a ton of NPS knockoff apps use this API as their backend and the devs are just going to let the AWS autoscaler keep going up and up as the taxpayers pay for it.
They are.
The NPS's customers are the American public, and they are paying for the data via taxes (approximately; government finance is more complex than that, and "taxes pay for spending" is a model that only really holds for commodity rather than fiat currency, but it's reasonable enough for this purpose).
What you want isn't for the current customers to pay for the data, but for the consumers of the data to be viewed as customers and then charged for it. But that potentially makes things more expensive for the NPS's customers. If, for instance, one of the significant consumers turns out to be another public agency, it pays a direct cost covering both access to the data and the extra billing and account-management costs a consumer-pays model imposes, plus the payer-side overhead associated with making payments on top of the payments themselves. You end up with largely the same customers paying, but paying a whole lot of additional overhead.
The taxpayers are the patrons not the customers.
We are both.
The taxpayers are the patrons. The API consumers are the customers. When the customers make a lot of API calls or cause security issues, the patrons have to pay.
I'd argue that this is an overly restrictive framing of the situation for a few reasons.
First, patron is just another word for customer. I understand that you're using it to distinguish between two customer types:
1. The end user (presumably a future park visitor)
2. The intermediary providing an experience to the same end users (who are presumably using the intermediary to plan a future park visit)
But this is not a formal distinction, and I think it's necessary to zoom out and look at the players involved and the nature of their relationship with the NPS.
Is it not true that "Patrons" are still likely to be the initiators of those API calls?
Is it not also true that the "Customers" exist within the same tax system as the "Patrons"? If I go to nps.gov as a "Patron", I'm directly consuming NPS resources. If I access NPS information via a site that provides an alternative experience, I (the future park goer) am still the primary beneficiary of the API call.
Let's say I build a project that calls these APIs and my goal is to help people who have accessibility needs find the parks that are most amenable to their situation. Let's say this is a passion project and I'm not doing this to make money. What you're proposing would make such a project non-viable.
And I think there's a strong case to be made that the experience described in the last paragraph will consume fewer NPS resources than navigating through NPS page after page to find what I'm looking for.
And in the end, if the information helped a park goer find a place to spend their money, this is a net positive for the NPS.
This data domain doesn’t need a realtime api. They could host CSVs online with some mirrors and save millions of dollars hosting this stuff.
Ah yes, I see now. Yeah, it makes no sense to offer a REST response for each request.
On that note, what would the processing entail? Processing the GET request and packaging the entire dataset into a REST object, right? Or is it a more complex API that lets you run queries against the dataset? For that matter, wouldn't downloading a CSV also have to be packaged into a REST object?
The REST API provides parameters for filtering & pagination. All of that is unnecessary; it's a few hundred MB tops. CSV, BigQuery, anything is better than running REST on EC2.
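As a sketch of what I mean (the CSV URL and column names here are hypothetical): the consumer grabs the whole file once and does the filtering and paging locally, with zero server compute per query:

```python
# Sketch: with a plain hosted CSV, filtering and pagination happen
# client-side. The URL and column names are hypothetical.
import pandas as pd

parks = pd.read_csv("https://example.gov/nps/parks.csv")

# "Filtering" is just a local operation...
ca_parks = parks[parks["states"].str.contains("CA", na=False)]

# ...and "pagination" is just slicing.
page_size = 50
page_1 = ca_parks.iloc[0:page_size]
print(page_1[["fullName", "parkCode"]])
```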
Then use their API to populate a BigQuery public dataset and make available to all.
Otherwise, perhaps we, as outside observers, need to consider the possibility that those who made the decision to provide this service as such did so for reasons of which we may not be aware.
You’re not wrong, but it’s still perfectly relevant to discuss design decisions on hackernews. Are you new here?
Yes, I am new here. So why don't you tell me why you haven't answered this question I posited earlier:
There's a difference between paying for the information and paying for the expense of delivering that information. You have to pay for official copies of documents like birth/marriage/death certificates. Getting copies of public court documents costs money as well, even though you can read the data for free. Toner and paper aren't free; fancy paper for certs isn't free. You can have the data, but you've got to pay for the copy.
Well said.
OK, good suggestion. Here you go:
nps-public-data.nps_public_data
Looks like access denied? Good work!
The following tables are now available:
* parks
* people
* feespasses
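If anyone wants to try it, something like this should work from your own billing project (you pay for your own bytes scanned; I haven't verified the column names, so I'm selecting everything):

```python
# Query the nps-public-data.nps_public_data dataset named above.
# You supply your own billing project; column names are not assumed.
from google.cloud import bigquery

client = bigquery.Client(project="your-billing-project")
query = """
    SELECT *
    FROM `nps-public-data.nps_public_data.parks`
    LIMIT 10
"""
for row in client.query(query).result():
    print(dict(row))
```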
REST APIs are extremely battle-tested, easy to integrate with, and far more mainstream than BigQuery public datasets or any other niche technology that may or may not exist at some point in the future. If cost is truly an issue, perhaps the solution is to properly fund the NPS so it can make smart technology decisions.
At what cost?
I’m not sure if you’re serious (given the spammy nature of your posts, I’m inclined to believe not), but given that REST is the de facto standard for exchange of information between machines across the internet, I think the onus is on you to estimate how much money you think the NPS stands to save by doing it your way. Then the rest of us can evaluate whether that’s a good tradeoff.
CSV is also a de facto standard and is more common than REST, at 1/100 the cost.
Do you have numbers to back up the cost-savings estimate? I can imagine lots of REST implementations that are really inexpensive.
We only know, based on the information we have, that they are sharing CSV files via REST on AWS/EC2. That's the most expensive way to share it, and also risky.
What’s spammy about my post? I have asked people to focus on costs when they make general statements like “all govt data should have a REST api”.
Every REST API implementation is bespoke. What does "battle-tested" mean in this sense?
Sure, the concept of REST APIs is mature, but each implementation is untested.
With their API, they have to write a bunch of boilerplate code to transform from their SQL DB to REST: authentication, throttling, threat prevention, encoding, etc.
With BigQuery, they just copy the data in via CSV and BigQuery handles the indexing & query engine.
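The entire ingest step is roughly this (a sketch; the project, dataset, and file names are placeholders):

```python
# Sketch: ingesting a CSV into BigQuery replaces the REST boilerplate.
# Project/dataset/table/file names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="nps-data-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # infer the schema from the file
)

with open("parks.csv", "rb") as f:
    job = client.load_table_from_file(
        f, "nps-data-project.nps_public_data.parks", job_config=job_config
    )
job.result()  # wait for the load job to finish
```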
Who says they have a SQL DB? This looks to be almost entirely static data, occasionally updated.
It’s Apache Solr. Most of the data is static, but alerts and events get frequent updates.
Same concern about unnecessary code and compute stands.
Whatever storage format they have, they are writing boilerplate to transform it into REST. Regardless, it will be cheaper to just ingest into BigQuery.
Open source tools that present a simple, read-only REST API over a SQL DB with little to no custom code exist (so do proprietary tools, often from DB vendors and sometimes as part of SQL DB products). The same goes for NoSQL and sorta-SQL storage solutions.
The idea that they have to write a bunch of boilerplate code to do this is false. They might choose to do that, but it's definitely not necessary.
Again, open source canned solutions that take a little bit of configuration exist for many of those, and some of them are likely shared services that they just point at whatever service needs them.
Now do compute.
I am perfectly fine with it being considered part of the basic, taxpayer-supported functions of government agencies to be providing the public with relevant data.
If there is a concrete abuse or wildly disproportionate cost problem in particular areas, that may need to be addressed with charges for particular kinds or patterns of access.
At what cost? REST APIs are a very expensive way for the government to make CSV data available to the public.
They are a whole lot less expensive than tracking customer usage and billing for it, and a whole lot more useful to the public than having the data nominally publicly accessible but only "on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying Beware of the Leopard." [0]
[0] Douglas Adams, Hitchhiker's Guide to the Galaxy
Not in this case; that's my whole point. They are choosing the most expensive (and riskiest) way to make CSV files available to the public.
There's likely some truth to the idea that CSV would work well here, and it would likely be cheaper to operate. I wouldn't be surprised if a lot of clients are, or could be, doing full transfers of the data and running their own queries.
I'd be pretty happy with sqlite dumps too.
I don't really have an issue with the REST, though. I wouldn't be surprised if this was just a standard and cheap to set up Django+REST libraries stack. Yeah, the compute costs are higher than transferring static files, but I'd be shocked if this was taking enough QPS for the difference in cost to make a meaningful difference.
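Something like the following is all I'd expect that stack to amount to (a sketch with a made-up Park model; I have no idea what NPS actually runs):

```python
# Sketch of a cheap-to-stand-up Django + Django REST Framework stack.
# The Park model and its fields are invented for illustration.
from django.db import models
from rest_framework import routers, serializers, viewsets

class Park(models.Model):
    park_code = models.CharField(max_length=10)
    full_name = models.CharField(max_length=200)
    states = models.CharField(max_length=100)

class ParkSerializer(serializers.ModelSerializer):
    class Meta:
        model = Park
        fields = ["park_code", "full_name", "states"]

# Read-only viewset: GET list/detail endpoints, nothing else.
class ParkViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Park.objects.all()
    serializer_class = ParkSerializer

router = routers.DefaultRouter()
router.register(r"parks", ParkViewSet)
urlpatterns = router.urls  # wire into the project's urls.py
```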
I get wanting the government to be responsible, but this veers a bit too far into Brutalist architecture as an organizational principle.
You might be fine with it, but any taxpayer expense must be justified and cheaper alternatives explored. This is someone else's money, so it is very easy to feel entitled, but every penny saved here can go into other, better things like conservation and infrastructure in the parks.
Yes, and the flip side of that is you require people to have and use Google accounts to access public data? That’s not exactly ideal.
And at what cost? Why are we paying for customers to query this data in real time?
So someone can host a ripoff NPS app on the App Store and taxpayers now pay for content hosting?
The USPS provides a free service for address validation. You have to register an account and receive an access token. If they feel your token is being used too heavily, they can handle it as necessary. Saying this same concept couldn't be applied here is just a lack of imagination.
You can access for free, but if you abuse or break the TOS your access is revoked. Done
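For what it's worth, the NPS API appears to already work roughly this way: you register for a free key and send it with each request, which gives them a handle for throttling or revoking abusers. A sketch (the endpoint and header name are from memory of the docs, so treat them as assumptions):

```python
# Sketch of keyed access: a free registered token accompanies each
# request, so the agency can throttle or revoke abusive keys.
# Endpoint, header name, and response shape are assumptions.
import requests

API_KEY = "your-registered-key"  # issued free at developer.nps.gov

resp = requests.get(
    "https://developer.nps.gov/api/v1/parks",
    headers={"X-Api-Key": API_KEY},
    params={"stateCode": "CA", "limit": 5},
)
resp.raise_for_status()
for park in resp.json()["data"]:
    print(park["fullName"])
```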
USPS is a corporation with profit and loss. I still consider this irresponsible, but theoretically I'm not paying for their largesse.
Fine, host it as CSV as a backup for the luddites.
I'd consider it a public transit service. We wouldn't be upset about people using shuttle buses to get to the parks, would we? I think footing the bill long-term for an open platform whose principal beneficiaries actually use it is fine, so long as it provides a net benefit.
If you have to pay for the REST API or shuttle buses, which one gets funded?
This is a fascinating thread under this comment. Everyone is keying off of one part of the comment (querier pays) and not the more critical issue IMO - anticipating security and budgeting issues of hosting a real-time API. You suggested an alternative and everyone is pitting the status quo against that alternative instead of maybe looking for other alternatives that help address the issue.
People here clearly don't like a querier-pays model, and that's fine. But should NPS still reinvent the wheel across the SDLC to serve this data? I think there's a compelling argument in there.
Yes, thank you for noticing that. My bigger concern is NPS paying for expensive auto-scaling resources for what are basically CSV files that could be hosted cheaply and securely.
A REST API is a very expensive option once you include compute costs, transfer fees, and the admin costs to keep it up.
Not to mention the cost to implement a bespoke API and deal with security issues.
All to make CSV available!
Looking at the data made available by this API, I think it's safe to say this is fine.
On the list of alarming or even questionable things our taxes pay for, this doesn't even make the top 100.