I’m pulling JSON and "per_page" comes back as an int on the first page:
{"total":77,"per_page":50,"current_page":1,"last_page":2,"from":1,"to":2,"newest":"2023-01-17T21:44:44.000Z","oldest":"2021-10-13T23:22:50.000Z"},
Then on the second page it’s giving a string:
{"total":77,"per_page":"50","current_page":2,"last_page":2,"from":2,"to":2,"newest":"2023-01-17T21:44:44.000Z","oldest":"2021-10-13T23:22:50.000Z"},
This is a real annoyance, as it means I have to abandon strict type checking if I want to unmarshal the JSON in Go.
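To illustrate, here’s a minimal Go sketch (the `Meta` struct and its fields are just my own stand-in for the pagination block above, not Epicollect’s definition) showing how the second page breaks a strictly typed unmarshal:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Meta mirrors the pagination block shown above (hypothetical struct).
type Meta struct {
	Total       int `json:"total"`
	PerPage     int `json:"per_page"`
	CurrentPage int `json:"current_page"`
	LastPage    int `json:"last_page"`
}

func main() {
	page1 := []byte(`{"total":77,"per_page":50,"current_page":1,"last_page":2}`)
	page2 := []byte(`{"total":77,"per_page":"50","current_page":2,"last_page":2}`)

	var m Meta
	fmt.Println(json.Unmarshal(page1, &m)) // <nil> – numeric per_page unmarshals fine
	fmt.Println(json.Unmarshal(page2, &m)) // json: cannot unmarshal string into Go struct field Meta.per_page of type int
}
```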
This is also happening with other fields, but only fields generated by the Epicollect API, such as pages, mapping, etc. It’s going to be a game of whack-a-mole trying to fix them if there’s no way to figure out why they differ across pages.
I’m still getting random strings in the responses. It’s consistent for per_page, but now some user-entered values are coming back as strings too.
Raw view in Postman still shows the string/int variability:
I’m also using stock Postman settings, i.e. not changing anything, and I’ve even tried a different machine and network. Obviously my Go code proves nothing on its own, since that’s my own source, so we’ll set Go aside, but the int/string variability is consistent across Postman sessions and networks.
Even trying another API client, such as Testfully, gives the same result:
Update:
Ok, so I’ve realized why this happens. If you use the “next” and “previous” links that the API gives you, the values change to strings, because it looks like the API is pulling them straight from the links themselves (if you look at the 2nd link above, you can see per_page appears as a query parameter in the link, which is generated by the API).
Great, found it. However, this doesn’t really make much sense. Why is the API not converting these values back to the correct JSON types?
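Query-string parameters have no types, which would explain the behaviour if the API is echoing them straight back into the JSON without casting. A quick Go illustration (the URL here is just a stand-in for the “next” link the API returns, not the real endpoint):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical "next" pagination link carrying per_page as a query parameter.
	next := "https://example.com/api/export/entries/my-project?page=2&per_page=50"

	u, err := url.Parse(next)
	if err != nil {
		panic(err)
	}

	perPage := u.Query().Get("per_page")
	fmt.Printf("%q is a %T\n", perPage, perPage) // "50" is a string
}
```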
Update 2:
I’m now getting a float that’s being converted to a string on page 28 for “Length_cm” and “Weight_g”:
As I’ve updated in my post, it’s also happening with other fields, ones that are user-input fields.
This also happens on these input fields (things that aren’t per_page, etc.) with any link you try, even non-API-generated links.
This basically means pulling data for this project will require a lot of manual work. Even then, the behaviour isn’t consistent and changes between requests. It seems like a race condition or something similar.
It’s interesting because I could have sworn that the float and int values I mention were working yesterday. Maybe I’m misremembering. Either way, it is very inconsistent and type checking is now useless!
We have more projects starting, and I will have to advise moving to another platform if this can’t be fixed or a fix will take a long time, as it breaks import pipelines.
I’m not trying to be demanding or anything, just wondering whether this will get any attention, so that our current projects can move forward and new projects stay maintainable.
Alright, so it looks like it’s not going to be looked into.
For anyone else coming here with this issue, there is a workaround (not a fix): make the types generic and write your own conversions. In Go you can do this with interfaces, and you can likely do the same in whatever language you’re using (this is also working in a C# backend service we’re running).
It just takes some manual setup and some basic knowledge of type conversions. However, this of course means that some assumptions have to be made and type safety is no longer quite so safe; in addition, whatever is causing these issues could also be causing incorrect data to be exposed from Epicollect’s API.
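One way to do this kind of manual conversion in Go, as an alternative to plain interfaces, is a custom `json.Unmarshaler` that accepts either a number or a quoted number. A minimal sketch (the `FlexInt`/`FlexFloat` names and the `Meta` struct are hypothetical, not part of the Epicollect API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// FlexInt unmarshals from either 50 or "50".
type FlexInt int64

func (f *FlexInt) UnmarshalJSON(b []byte) error {
	s := strings.Trim(string(b), `"`) // strip quotes if the value arrived as a string
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return err
	}
	*f = FlexInt(n)
	return nil
}

// FlexFloat does the same for float fields such as Length_cm / Weight_g.
type FlexFloat float64

func (f *FlexFloat) UnmarshalJSON(b []byte) error {
	s := strings.Trim(string(b), `"`)
	n, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return err
	}
	*f = FlexFloat(n)
	return nil
}

type Meta struct {
	Total   FlexInt `json:"total"`
	PerPage FlexInt `json:"per_page"`
}

func main() {
	var m Meta
	// Works whether per_page arrives as 50 or "50".
	if err := json.Unmarshal([]byte(`{"total":77,"per_page":"50"}`), &m); err != nil {
		panic(err)
	}
	fmt.Println(m.Total, m.PerPage) // 77 50
}
```

The trade-off is the one mentioned above: you’re assuming every quoted value really is a well-formed number, so bad data from the API will only surface as a parse error at unmarshal time.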
Hopefully this will be corrected in Epicollect’s actual API, but their lack of response suggests it’s not a priority for them.