I have an API "wrapper" that retrieves sitefinity content for my webpages. It works fine, but it can be a bit slow for large pages with many pieces of content. Since I can't alter the site layout/design, I'm trying to address performance by caching and improving load times.
I've implemented ASP.NET output caching, which works really well, but the cache lifetime is unpredictable even though I've specified a hard duration on every page and in the config. I suspect the cache is hitting a memory limit and evicting entries: when crawling all the pages, the ASP.NET worker process climbs to about 750-800 MB and then drops back down to around 600 MB. I've read that ASP.NET manages the cache size dynamically based on available system resources.

There's always DB caching, but we use authentication on every page, so the cache substitution controls are required, which rules out the other cache implementations. I love the caching, but it seems our site is just too big to rely on it alone.
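For what it's worth, I've been looking at pinning the cache's memory ceiling explicitly in web.config rather than leaving it to ASP.NET's dynamic management. Something like this (the values are illustrative, not tuned):

```xml
<!-- web.config: cap ASP.NET cache memory explicitly (values are placeholders) -->
<system.web>
  <caching>
    <cache
      privateBytesLimit="800000000"
      percentagePhysicalMemoryUsedLimit="50"
      privateBytesPollTime="00:02:00" />
  </caching>
</system.web>
```

That might make the eviction behavior predictable, but it doesn't change the fact that the underlying retrieval is slow when the cache misses.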
Our content is stored based on categories, with tags used to distinguish between product-specific documents. Based on the developer's guide, I came up with the following code to retrieve content for a given product:
```csharp
private static List<CmsContentBase> FindContentGivenLibraryAndCategoriesAndTagNames(
    LibraryType type, IEnumerable<string> categoryNames, IEnumerable<string> tags)
{
    var list = new List<CmsContentBase>();
    var contentManager = GetContentManager(type);
    foreach (var categoryName in categoryNames)
    {
        var filter = new IMetaSearchInfo[]
        {
            new MetaSearchInfo(MetaValueTypes.ShortText, "Category", categoryName)
        };
        var listOfItems = contentManager.GetContent(filter);
        foreach (CmsContentBase document in listOfItems)
        {
            if (IsNotFromCorrectLibrary(document, type)) continue;

            var foundDocument = FindDocumentMatchingTags(contentManager, document, tags);
            if (foundDocument != null)
                list.Add(foundDocument);
        }
    }
    return list;
}

private static CmsContentBase FindDocumentMatchingTags(
    ContentManager contentManager, CmsContentBase document, IEnumerable<string> matchingTags)
{
    var allTags = contentManager.GetTags(document.ID);
    var found = allTags.ToArray<ITag>().Join(matchingTags,
        allTag => allTag.TagName,
        matchingTag => matchingTag,
        (allTag, matchingTag) => matchingTag);
    if (found != null && found.Count() == matchingTags.Count())
        return document;
    return null;
}
```
The obvious performance issue here is that I have to go back to the db for each document to retrieve its tags before I have enough information to perform the match. Considering I have some categories with 200 or even 300+ documents/content items and some of these pages pull in up to a dozen of them, it's no wonder some pages can take 10-15 seconds to load.
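One mitigation I've been considering is memoizing the tag lookups so each document's tags are fetched at most once, and doing the match with a HashSet instead of a Join. A rough sketch, using only the same `ContentManager.GetTags` call as above (untested against the real Sitefinity 3.7 API, and I'm assuming `document.ID` is a Guid):

```csharp
// Sketch: cache each document's tag names so repeated matches don't hit the DB,
// and compare with a HashSet. Assumes the ContentManager/ITag types shown above.
private static readonly Dictionary<Guid, string[]> _tagCache =
    new Dictionary<Guid, string[]>();

private static CmsContentBase FindDocumentMatchingTags(
    ContentManager contentManager, CmsContentBase document, IEnumerable<string> matchingTags)
{
    string[] tagNames;
    if (!_tagCache.TryGetValue(document.ID, out tagNames))
    {
        // Still one DB round-trip per document, but only on the first lookup.
        tagNames = contentManager.GetTags(document.ID)
            .Cast<ITag>()
            .Select(t => t.TagName)
            .ToArray();
        _tagCache[document.ID] = tagNames;
    }

    var tagSet = new HashSet<string>(tagNames);
    // The document matches only if it carries every requested tag.
    return matchingTags.All(tagSet.Contains) ? document : null;
}
```

In real use the static dictionary would need locking (or could live in HttpRuntime.Cache with an expiration), and it only helps when the same documents are matched repeatedly across pages; it doesn't fix the first crawl.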
Is there a better way to pull in tag information as part of the "MetaSearchInfo" filter? Or is there a (supported) way for me to query the database directly and skip the API? I think I read somewhere that the Nolics ORM provider is being phased out, and I've avoided using it thus far; I'm reluctant to go against the DB except as a last resort.
We can also upgrade to 3.7 SP3 (we're currently on 3.7 SP1), and I understand it includes some performance improvements for large databases... but the way I'm going after this content just isn't efficient, and I fear it'll still be slow even in 4.0 unless I change it.
Appreciate your advice, I'm sure there has got to be a better way...