Performance Tips

Apr 18, 2011 at 5:41 AM
Edited Apr 18, 2011 at 5:45 AM

I'm saving 46,000 records and it's taking around 4 minutes. I initially handled this myself, writing directly to isolated storage in one large file, and it took around 35 seconds. I understand that saving many files will be slower than one large file, but the save was several minutes in before the flush was even performed. I can't believe it would take a few minutes just to look up keys and indexes in memory?

I have a simple class that I'll list at the end. The key is a Guid and I have a single index. No relationships with other classes. I flush once after I have saved all the classes.

Any tips on how to improve the performance? I guess that for each record Sterling has to check the key to decide whether to insert or update. Is there any way to tell Sterling to bypass the key check and just insert?

Here is the class, which is downloaded via a WCF service.


    public class GridDetailDto
    {
        public Guid GridDetailId { get; set; }

        public int GridMasterId { get; set; }

        public int JobId { get; set; }

        public int ScreenId { get; set; }

        public int RowId { get; set; }

        public string FieldName { get; set; }

        public string FieldValue { get; set; }

        public GridDetailDto() {}
    }

Here is the table create script:

    CreateTableDefinition<GridDetailDto, Guid>(k => k.GridDetailId)
        .WithIndex<GridDetailDto, int, int, Guid>(GRID_DETAIL_MASTER_JOB_INDEX, t => Tuple.Create(t.GridMasterId, t.JobId));
Apr 18, 2011 at 2:51 PM

First, a question: is this on the phone or on the desktop? Assuming phone?

Second, unfortunately most of the time is not Sterling at all, but the speed of isolated storage. Isolated storage on the phone is slow. To test it, you can create a List of 46,000 elements and just save it using one of the built-in data contract or XML serializers, and you'll find it takes a LONG time. It's a scenario that's not well supported, and it shouldn't be, because the phone is not intended to be a large-scale database. It's meant to store local, important information and then synchronize with other sources for the heavy lifting.
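
To measure that isolated-storage floor yourself, a rough benchmark along these lines (standard Silverlight/WP7 APIs, no Sterling involved; the record counts and file name are just placeholders) can separate Sterling overhead from raw I/O:

    // Baseline: serialize ~46,000 records straight to isolated storage
    // with DataContractSerializer. If this alone is slow, the bottleneck
    // is isolated storage itself, not Sterling's key/index bookkeeping.
    var records = new List<GridDetailDto>();
    for (var i = 0; i < 46000; i++)
    {
        records.Add(new GridDetailDto { GridDetailId = Guid.NewGuid(), FieldName = "F" + i });
    }

    var watch = System.Diagnostics.Stopwatch.StartNew();
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var stream = store.CreateFile("baseline.dat"))
    {
        var serializer = new DataContractSerializer(typeof(List<GridDetailDto>));
        serializer.WriteObject(stream, records);
    }
    watch.Stop();
    // Compare watch.ElapsedMilliseconds against the Sterling save time.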

About the only thing I can offer here is to consider breaking up the tables, or to consider how the data might be chunked and whether it all needs to be saved at once. No user can deal with 46,000 pieces of information at once, so the question becomes whether there is an opportunity to load segments on demand as the user needs them. Tough to say without understanding more about what you are trying to accomplish.

Remember there is an async save option you can pass a list to as well, so the work can happen in the background.
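
Combining the two suggestions, a chunked background save might look roughly like this. Treat the SaveAsync signature (taking a list and returning a BackgroundWorker) as my recollection of the Sterling API rather than gospel; the chunk size and the allRecords/database names are placeholders:

    // Sketch: save one chunk in the background, then queue the next.
    const int chunkSize = 1000; // hypothetical; tune for your data
    var chunk = allRecords.Take(chunkSize).ToList();

    var worker = database.SaveAsync(chunk);
    worker.RunWorkerCompleted += (s, e) =>
    {
        database.Flush(); // flush per chunk, or once after the last chunk
        // kick off the next chunk here
    };
    worker.RunWorkerAsync();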

Apr 18, 2011 at 8:57 PM

Thanks for the suggestions Jeremy. Will try some chunking!