
Fastest access?

Question for the "under the hood" guys...

Considering 3 ways of "reading" records from OI tables: READ, READO and XLATE.

I'm currently stripping the file variable of any MFS's for the purpose of this exercise, and storing file variables in a labeled Common.

Other than being able to reference by field name (for symbolics), is there any reason to use/prefer Xlate over a straight Read/ReadO?

I'm considering coming up with an Xlate replacement that would utilize raw file variables for both data and dict Reads, likely using CalculateX for symbolics. Also considering buffering the last 5000+ records read (for big batch processes).

I'm working with a client with some lazy programmers, so I'm trying to come up with a way for them to use their repeated Xlates/equivalents without the serious performance hit we're seeing.

Any suggestions?

Comments

  • In my opinion, Xlate is only preferable for two reasons:
    1. Syntax convenience.
    2. Caching benefits.
    If you want the absolute fastest performance then call the BFS directly.
  • I've done that before too :) Is there an advantage to calling the BFS over doing a Read/O from the raw file variable?

    What are the caching benefits for an Xlate?

    Thanks again!
  • Read/O statements initiate the MFS/BFS chain, so calling the BFS directly offers a little less overhead. For individual transactions the benefits are negligible. If you are processing thousands of transactions then it might add up.

    Caching simply means it returns the last known record for a given Key ID without having to read from the disk. So, unless you are reading the same Key ID more than once, there is no benefit to caching.
  • In this case, they could very well be reading the same record many, many times during the process.

    Would using Xlate be better than using the BFS and doing the caching myself?
  • Xlate won't bypass any MFS routines in the chain, so if that is important then you might want to roll out your own solution. We have our own solution that does this for just this very reason.

    Note: Xlate only caches up to 9 reads (regardless of which Key ID) and then it recycles the cache. So, this caching is not indefinite.
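The recycling behavior described above can be sketched as a small fixed-size cache. This is an illustration of the concept only (in Python rather than BASIC+), not OpenInsight's actual Xlate internals; the class and its eviction policy are assumptions for demonstration:

```python
from collections import OrderedDict

CACHE_SIZE = 9  # the cache depth mentioned above for Xlate

class BoundedRecordCache:
    """Illustrative FIFO cache that recycles once it holds CACHE_SIZE
    entries. Xlate's real cache policy is internal to OI; this only
    shows why an often-repeated Key ID can still hit the disk again."""

    def __init__(self, read_from_disk):
        self.read_from_disk = read_from_disk  # fallback, e.g. a real table read
        self.cache = OrderedDict()
        self.disk_reads = 0

    def get(self, key_id):
        if key_id in self.cache:
            return self.cache[key_id]          # served from cache, no disk I/O
        record = self.read_from_disk(key_id)
        self.disk_reads += 1
        if len(self.cache) >= CACHE_SIZE:
            self.cache.popitem(last=False)     # recycle the oldest entry
        self.cache[key_id] = record
        return record
```

The point: once nine other Key IDs have been read in between, the original Key ID has been recycled out and costs another disk read.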
  • Thank you, kind sir!
  • edited September 12
    "Caching simply means it returns the last known record for a given Key ID without having to read from the disk. So, unless you are reading the same Key ID more than once, there is no benefit to caching."

    This is good to know. I have a routine which gets called 1000s of times, and each time it's called it needs to read a config record. Since it uses xlate, it won't have to read it every time, only once? Perfect.

    *I could pass the config record to the routine, but i'd rather not as it makes the routine more complicated to call...
  • Since it uses xlate, it won't have to read it every time, only once? Perfect.

    Well, as I noted above, Xlate will use its internal cache 9 times and then it goes back to the disk. So, you need to decide if that is enough for you. We wrote our own service module (Database_Services) with a ReadDataRow service to replace Xlate and normal Open/Read statements. This service has a parameter that allows the developer to determine if cached data should be used and how stale the cache can be (i.e., how old must it be before it gets re-read from disk).
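The staleness idea behind a ReadDataRow-style service can be sketched like this. Note this is my own illustration in Python, not SRP's actual Database_Services code; the method and parameter names (`read`, `use_cache`, `max_age_seconds`) are hypothetical:

```python
import time

class StalenessCache:
    """Sketch of a cache with a developer-controlled maximum age,
    as described for the ReadDataRow service above. All names here
    are assumptions for illustration, not SRP's actual API."""

    def __init__(self, read_from_disk):
        self.read_from_disk = read_from_disk
        self.cache = {}   # key_id -> (record, timestamp)

    def read(self, key_id, use_cache=True, max_age_seconds=60.0):
        now = time.time()
        if use_cache and key_id in self.cache:
            record, stamp = self.cache[key_id]
            if now - stamp <= max_age_seconds:
                return record                  # fresh enough: no disk read
        record = self.read_from_disk(key_id)   # bypassed or stale: re-read
        self.cache[key_id] = (record, now)
        return record
```

The caller decides per read whether cached data is acceptable and how old it may be, which fits the config-record case above: read once, then serve from cache for as long as the developer deems safe.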
  • I created 2 functions to speed things up. One is a replacement for an Open call, called TOpen. You pass in the table name and it returns the table variable. The 2nd (optional) parameter is a flag that says whether to return the "raw" variable (i.e. with no MFS's). It maintains an array of tablenames and variables so you only have to "open" a table one time.

    The 2nd function is where the magic happens. It's a replacement for Xlate that I've called TLate. Like XLate, you pass the tablename and the ID. There are 2 other (optional) parameters: the FMC or Fieldname that you want to extract and one more boolean flag. The ID and Field parameters work just like in XLate; if you leave them null, it will return an entire record. TLate uses RTP65 to create and maintain a hash table. Any call to TLate first checks the hash table to see if the record has been read before. If not, it reads the record and writes it to the hash table. The record, whether hashed or read, is used to fulfill the call. The optional 4th parameter is a flag that will cause TLate to bypass the hash table and just read from the table. Of course, the OI table in question is opened with the TOpen function.

    My client had a process that involved reading hundreds of support records for every one "main" record, and hundreds of "main" records, which resulted in hundreds of thousands of Reads, many of which were the same records over and over. It took over 7 minutes to return results. Using these functions, the process runs in 9 seconds! HUGE improvement. It won't work for every purpose, but I highly recommend it.
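The TOpen/TLate pattern described above can be sketched as handle memoization plus a keyed record cache with a bypass flag. This is a Python illustration of the pattern, not Michael's actual BASIC+ code; the handle object and its `read` method are stand-ins for OI's Open/Read and the RTP65 hash table:

```python
class TableCache:
    """Sketch of the TOpen/TLate pattern: open each table once, cache
    records per (table, key), and allow a per-call cache bypass.
    Names and structure are illustrative assumptions only."""

    def __init__(self, open_table):
        self.open_table = open_table   # e.g. a raw Open with the MFS chain stripped
        self.handles = {}              # table name -> opened handle   (TOpen)
        self.records = {}              # (table, key) -> record        (TLate)

    def topen(self, table):
        # Open each table at most once; reuse the stored handle after that.
        if table not in self.handles:
            self.handles[table] = self.open_table(table)
        return self.handles[table]

    def tlate(self, table, key_id, field=None, bypass_cache=False):
        handle = self.topen(table)
        cache_key = (table, key_id)
        if bypass_cache or cache_key not in self.records:
            self.records[cache_key] = handle.read(key_id)  # actual disk read
        record = self.records[cache_key]
        # A null/None field returns the whole record, like Xlate.
        return record if field is None else record.get(field)
```

With a workload like the one above (the same support records read over and over), every repeat lookup after the first is a dictionary hit rather than a disk read, which is where the 7-minutes-to-9-seconds kind of gain comes from.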
  • @donbakke
    "Well, as I noted above, Xlate will use its internal cache 9 times and then it goes back to the disk. So, you need to decide if that is enough for you. We wrote our own service module"
    Whoops, I missed that part. OK, I will just pass in the data rather than reading it every time... even though that is annoying.

    @Michael
    Hi, sounds complicated... not sure if I can do this in my system. But yes, I had a similar problem to yours, and I used the SRP Hash Table.
  • It wasn't as bad as you might think. And it was a huge performance gain. We're in the process of utilizing it in other processes as well now.
  • @josh - I'm glad I reiterated the issue with Xlate. What Michael is describing is virtually the exact same thing our Database_Services module does (we also have an argument so the table handle is stripped of the MFS chain for the same reason he offered). Also, the Database_Services module uses the SRP HashTable for caching. Thus, the whole thing is abstracted, which is what I imagine Michael's routine does.

    I don't think you will find it too complicated, but you will probably want to build this out in small steps. Or... you could always ask someone to provide their own solution. If you have either our full FrameWorks or the HTTP Framework product then you will also have Database_Services already.
  • @DonBakke is there an official write-up on Database_Services?
  • @BarryStevens - Yes, in fact there is. It's been added to the SRP FrameWorks reference documentation (since that's where it originated but I decided to include it with HTTP Framework as well):

    Database_Services
  • @DonBakke Ok, thanks.
    Sorry I must have missed the announcement.
  • @BarryStevens - No, you didn't miss anything. My response was to explain why you weren't aware of this module being in the HTTP Framework product. As the SRP HTTP Framework product evolves, I sometimes decide to include more service modules that are already in SRP FrameWorks. These often don't get announced formally because I'm only using a subset of the services for internal purposes. However, developers get the benefit of the full suite when this happens so I'm happy to use opportunities like this to make developers aware.
  • Ok, got it. Hence, it always pays to ask, "is there something that will do this?"