
Table not showing all records

I have a native table in OI that shows it has about 5000 records, but the LK file size is extremely large. Selecting the records also takes a long time, as if the table held many, many more records, yet it still reports that it has selected the 5000 records. When I try to delete records, it takes longer than normal to delete even a single record. I even tried to clear the table using the Database Manager - Clear Table... menu item, but it processes for a while and nothing happens. I don't know a lot about GFEs, but I ran Verify LH Tables in the Database Manager; it took a long time to process and ended up giving me some statistics about the table, but it did not find any GFEs to repair. Any suggestions on how I can troubleshoot this? Luckily the data in this particular table is not crucial, so I don't mind erasing all the records, or even making a new table and removing this one, but I would like to understand how it ended up this way and how to troubleshoot in case it ever happens to a table whose data I do need to salvage.

Comments

  • If the LK is extremely large and you only have about 5000 records, then I would suspect this is a presized table designed to hold many more records than it actually contains and perhaps it is sizelocked so it won't resize. Tables of sufficiently large size can take longer than would otherwise be expected when performing table-based operations. I don't think this accounts for every oddity, especially slow reads and failed clear tables. Nevertheless, use the Database Manager > Utilities > OpenInsight Tables > LH Table Info to get some additional metrics and report them here.
  • Here is the LH Table Info

  • Well, look at your average record size: 2.6 MB. Even if each record were exactly that size, you should expect each record to take a moment to read. I suspect that your record sizes aren't uniform, so you have some that are even larger and some that are significantly smaller. This can cause the table to be poorly hashed. Do you need log records that are this big? What is the actual size of the LK and OV files? I'm guessing their combined size must be bigger than 12GB (5,000 records at an average of 2.6 MB works out to roughly 13 GB).
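    If it helps, here is a rough, untested BASIC+ sketch of how you could scan the table for oversized records (the DEBUG_LOG table name and the 1 MB threshold are just placeholders for illustration):

    * Scan a table and count records over a size threshold
    * (the table name and threshold are placeholders - adjust for your system)
    Open 'DEBUG_LOG' To hTable Else Return

    Select hTable
    Done = 0
    BigCount = 0
    Loop
        Readnext Key Else Done = 1
    Until Done
        Read Rec From hTable, Key Then
            If Len(Rec) GT 1000000 Then
                BigCount = BigCount + 1
                * Key identifies an oversized record - log it or Delete hTable, Key here
            End
        End
    Repeat
    * BigCount now holds the number of oversized records found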
  • The logs are not uniform. This behavior started after a particular looping error that I thought would have just created a lot of debug records, but apparently it must have made one or more very large records in the table. I'm having trouble removing the records.
  • Oh, you said your LK file was quite large... but I think you meant your OV file. Okay, this helps explain the performance issues even more. My previous assertions were based on the assumption that the LK file was the larger of the two, which generally only occurs when the table is proactively being managed. What you have here is the common result of a table growing much bigger than it was originally designed for, and thus most of the data is found in the OV (i.e., overflow) file. Therefore, you have two things working against you:
    • Records that are so large they can't be read in one pass because they won't fit into a single packet.
    • The majority of the data living in the OV file, which means one pass has to go to the LK file to get the beginning of the record and then go to the OV file (without the benefit of hashing!) to get the rest of the record.
    Honestly, this is about as close to a worst-case scenario as it gets.
  • Ok - sorry, I rushed to make this post and I didn't double check my info. I had remembered the LK file being large, but you are correct, of course, that it is the OV file. Any way to resolve it?
  • The logs are not uniform. This behavior started after a particular looping error that I thought would have just created a lot of debug records, but apparently it must have made one or more very large records in the table. I'm having trouble removing the records.


    You said that your main concern was trying to understand what caused the problem and that under this condition you would have no problem recreating the table. I suggest you do this rather than fight against a table that seems to resist your attempts to clear it. You can create a new table with a temp name (e.g., DEBUG_LOG_NEW), copy over the dictionary records to save time, delete the original table, and then rename the temp table to DEBUG_LOG.
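    If you would rather script the dictionary copy than do it by hand, something along these lines should work - a rough, untested sketch that assumes DEBUG_LOG_NEW has already been created in the Database Manager:

    * Copy dictionary records from the old table's dictionary to the new one
    Open 'DICT.DEBUG_LOG' To hOldDict Else Return
    Open 'DICT.DEBUG_LOG_NEW' To hNewDict Else Return

    Select hOldDict
    Done = 0
    Loop
        Readnext Key Else Done = 1
    Until Done
        Read Rec From hOldDict, Key Then
            Write Rec To hNewDict, Key
        End
    Repeat
    * Then delete DEBUG_LOG and rename DEBUG_LOG_NEW to DEBUG_LOG in the Database Manager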
  • Any way to resolve it?


    Does the above help you resolve it?
  • Yes - thank you for explaining it so I could understand what was going on and thank you for the explanation of how to quickly re-create and delete the old table data. I appreciate it!