SRP_JSON Technique for Non-Trivial Data
Playing with SRP_JSON - loving it! Questions arising…
I'm sending/receiving non-trivial amounts of data and trying to do it in a minimalist fashion while still complying with the JSON standard - say, customer data that looks like this:
{"data":[
["ABC001", "ABC Corporation", "Perth", 6000, "0400 123 456"],
["ABC002", "ABC Blinds", "East Perth", 6001, "0400 987 456"],
["DEF001", "Defence Industries", "Como", 6152, "9386 1234"],
["XYZ999", "XYZ Enterprises", "Perth", 6000, "0400 123 456"],
…
],
"def":[
{"name": "Cusno", "type": "T", "def":[100,false,true,"",true,true,0,"L","","",""]},
{"name": "Name", "type": "T", "def":[100,false,true,"",true,true,0,"L","","",""]},
{"name": "Suburb", "type": "T", "def":[100,false,true,"",true,true,0,"L","","",""]},
{"name": "Post Code", "type": "N", "def":[100,false,true,"",true,true,0,"R","0","",""]},
{"name": "Telephone", "type": "T", "def":[100,false,true,"",true,true,0,"R","","",""]}
]
}
It works fine in terms of SRP_JSON parsing and extracting the data.
However, if there are, say, 3000 data 'records', it takes a full 50 seconds to extract the data using the SRP_JSON methods - a GET for each data row and then a GETVALUE for each array value (roughly the loop sketched below).
I assume this is because it has to navigate to each element from scratch on every call.
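For reference, the extraction loop is roughly this shape (a sketch, not my exact code - hJson, NumRows and NumCols are placeholders, and the SRP_Json path syntax should be checked against the SRP Utilities docs):

    Declare Function SRP_Json

    If SRP_Json(hJson, "Parse", JsonString) EQ "" then
        For RowIdx = 1 to NumRows
            // One GET per row to obtain a handle to the row array
            hRow = SRP_Json(hJson, "Get", "data[" : RowIdx : "]")
            If hRow then
                // Then one GETVALUE per cell within that row
                For ColIdx = 1 to NumCols
                    Customers<RowIdx, ColIdx> = SRP_Json(hRow, "GetValue", "[" : ColIdx : "]", "")
                Next ColIdx
                SRP_Json(hRow, "Release")
            end
        Next RowIdx
        SRP_Json(hJson, "Release")
    end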
Is there a more efficient way of doing this using JSON?
I'm trying to build some things that can share nicely with others, and JSON seems to be most common.
I'm not constrained to a specific structure - I did it this way to minimise the amount of data (i.e. to eliminate all the repeated object names).
Thanks and regards,
Matt Jones.
Comments
Your issue is one that I recently stumbled onto myself. Creating a rather large JSON array is surprisingly quick. The extraction process, however, was starting to lag a bit. Using our indispensable SRP_Stopwatch tool, I was able to identify that the main drag was, in fact, occurring with the GET method. The bigger the array, the more GET methods were required, and the longer the entire extraction process took.
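For anyone wanting to repeat the measurement, the pattern was essentially this (a sketch assuming SRP_Stopwatch's Reset/Start/Stop/Show services; hJson and NumRows are placeholders):

    Declare Function SRP_Json
    Declare Subroutine SRP_Stopwatch

    SRP_Stopwatch("Reset")
    For RowIdx = 1 to NumRows
        // Accumulate time spent in just the GET calls across the loop
        SRP_Stopwatch("Start", "GET")
        hRow = SRP_Json(hJson, "Get", "data[" : RowIdx : "]")
        SRP_Stopwatch("Stop", "GET")
        If hRow then SRP_Json(hRow, "Release")
    Next RowIdx
    // Display the accumulated time per named timer
    SRP_Stopwatch("Show")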
As a result, we created a new method called GETELEMENTS. You just need to call it once for your array and it will return an @FM delimited list of objects - one corresponding to each item in your array. It works remarkably fast because it handles everything internally in one pass rather than marshalling data back and forth through Basic+ call by call. We will soon officially release a new version of SRP Utilities that includes it.
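To give you an idea, usage should look something like this (a sketch against the upcoming release, so details may shift; error handling omitted):

    Declare Function SRP_Json

    hData = SRP_Json(hJson, "Get", "data")
    If hData then
        // One call replaces thousands of GETs: an @FM delimited list of row handles
        RowHandles = SRP_Json(hData, "GetElements")
        NumRows = DCount(RowHandles, @FM)
        For RowIdx = 1 to NumRows
            hRow = RowHandles<RowIdx>
            // e.g. the second cell of each row is the customer name
            Name = SRP_Json(hRow, "GetValue", "[2]", "")
            // Release each element handle when done (check the docs on handle lifetime)
            SRP_Json(hRow, "Release")
        Next RowIdx
        SRP_Json(hData, "Release")
    end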
The content of your JSON is interesting. It seems as if you are using JSON to store data and schema separately. Do I understand this correctly? Is this a format you created or are you following a format that someone provided you? I'm only asking because JSON is normally formatted to be self-descriptive. In your case, to find the name of a customer, someone would first need to parse the def array, map the position of the Name object in your array, and then parse out the associated position from an object in the data array. This prevents you from using a more direct way to reference the data.
Thanks for the reply.
Yes, I created that format myself! Given the amount of time it took to parse the data at the receiving end, it seemed like a good idea to minimise it - it halved the amount of data to parse. I got so frustrated with the slow speed I actually changed it and sent the data as a BASE64 encoded string of OI delimited data! :)
But that's not what I want - I really want to try to go with a more widely used standard.
By the sounds of what you're saying, sending the schema separately is going against the norm and might be difficult for others to interpret anyway.
I assume a more common / standard representation would be:
{"data":[
{“cusno”: "ABC001", “name”: "ABC Corporation", “suburb”: "Perth", “postcode”: 6000, “telephone”: "0400 123 456"},
{“cusno”: "ABC002, “name”: "ABC Blinds", “suburb”: "East Perth", “postcode”: 6001, “telephone”: "0400 987 456"},
{“cusno”: "DEF001, “name”: "Defence Industries", “suburb”: "Como", “postcode”: 6152, “telephone”: "9386 1234"},
{“cusno”: "XYZ999, “name”: "XYZ Enterprises", “suburb”: "Perth", “postcode”: 6000, “telephone”: "0400 123 456"},
…
]
}
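If I've understood GETELEMENTS correctly, reading that structure would be something like this (just my guess at the calls based on your description - names are placeholders):

    Declare Function SRP_Json

    hData = SRP_Json(hJson, "Get", "data")
    If hData then
        CustHandles = SRP_Json(hData, "GetElements")
        For CustIdx = 1 to DCount(CustHandles, @FM)
            hCust = CustHandles<CustIdx>
            // Named members mean no positional mapping via a separate def array
            Cusno = SRP_Json(hCust, "GetValue", "cusno", "")
            Name = SRP_Json(hCust, "GetValue", "name", "")
            Suburb = SRP_Json(hCust, "GetValue", "suburb", "")
            SRP_Json(hCust, "Release")
        Next CustIdx
        SRP_Json(hData, "Release")
    end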
Is that what you would suggest, even for larger amounts of data?
And do you think GETELEMENTS will do the trick?
And any idea when soon is (for the release)? :)
Thanks Don.
I think I've taken a step forward.
Looking forward to GETELEMENTS!!