Sunday, January 24, 2016

Windows 7: Windows could not search for new updates (Code 80244019)

Problem

You are performing maintenance on a user's Windows 7 machine.  You find that it is configured to notify but not download updates.  You perform a Check for updates.  A long list is returned.  From the list that is returned, you manually check several of the updates and initiate installation.  You then experience the following error: Windows could not search for new updates. Error(s) found: Code 80244019.
You repeat this process by selecting a different set of updates. Each time, you experience the above error.  If you select all updates, the update process appears to hang for an hour or more.

Solution

  • Configure updates as: Install updates automatically, and then wait as long as necessary until all updates are installed.
  • Once all updates are installed, you can configure updates back to Check for updates but let me choose whether to download and install them without further issue (a scripted equivalent of these two steps follows below).
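
A minimal scripted sketch, assuming the default (non-Group Policy) Windows Update configuration on Windows 7, where the AUOptions registry value controls this setting: 4 means Install updates automatically and 2 means Check for updates but let me choose whether to download and install them.  Run from an elevated PowerShell prompt:

      # Switch Windows Update to "Install updates automatically" (AUOptions = 4).
      $au = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update'
      Set-ItemProperty -Path $au -Name AUOptions -Value 4
      Restart-Service wuauserv
      # Once all updates have installed, revert to notify-only behavior (AUOptions = 2).
      # Set-ItemProperty -Path $au -Name AUOptions -Value 2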

References

  • None

Notes

  • It's unclear why this works.  If you know, please leave a comment.

Friday, January 22, 2016

SharePoint 2013: search crawls taking unusually long time to complete

Problem

You find that crawls are taking unusually long on your production farm, with crawl rates around 7-10 dps (documents per second), where 30-40 dps is what should be expected.  You check the user rights assignments of the service account and find that all user rights are appropriately configured.  You also find that your staging farm, which is configured identically to your production farm, shows expected crawl rates of 35+ dps.  You are unable to immediately identify what is different between the two farm configurations.

Solution

  • Check the Enterprise Search Service performance level using Get-SPEnterpriseSearchService.  If this is Reduced or Partly Reduced, change it to Maximum using Set-SPEnterpriseSearchService (see the sketch below).
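
A minimal sketch, assuming an elevated SharePoint 2013 Management Shell (note that the cmdlet spells the middle value "PartlyReduced"):

      # Check the current Search Service performance level.
      (Get-SPEnterpriseSearchService).PerformanceLevel
      # Raise it to Maximum (valid values: Reduced, PartlyReduced, Maximum).
      Set-SPEnterpriseSearchService -PerformanceLevel Maximum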

Notes

  • It had been over six months since I last touched this configuration item, and so I had quite forgotten it.  This was resolved with the help of Microsoft Support.
  • Initially, I experienced full crawl times of about 4 hrs on 110K items at 65 GB, with a crawl rate of 7-10 dps.  I then discovered the performance level was set to Reduced.  Changing this to Partly Reduced improved these metrics to a full crawl time of about 1 hr, with crawl rates of 40-42 dps: roughly a fourfold improvement from that one change alone.

Thursday, January 7, 2016

SharePoint 2013 TIP: how to easily and cost-effectively get unique users metrics

This SharePoint 2013 TIP presents a method for obtaining unique user counts quickly, easily, and at no cost.  The method uses existing IIS features and relies upon the capabilities of Microsoft's Log Parser 2.2, which is available free.  It involves just two small files: a batch file you execute in a command window and a configuration file containing the SQL.  Running it generates three CSV files: the first two are interim data files; the third is the one you want: it contains a listing of dates and unique users per date.  Once the script completes, it also writes to the console the total unique users found over all log files analyzed.  I run this script on a monthly basis to update unique user metrics.  I've also used it to analyze unique users for an entire year's worth of log files - no need to change any configuration when running the analysis on a month's log files or a year's worth: just drop the two files into the folder containing all the log files and execute.  What is provided here are the basics to help you get started.  OOTB, you can start capturing these metrics right away using the method presented here.  Once you get up to speed, you can customize it for many other applications.

Solution

  1. Assumptions: The following are some basic assumptions to keep in mind when engaging this method:
    • Windows Server 2012
    • IIS 8.0
    • W3C Log File Format
    • One log file per (IIS) site
    • Daily log file rollover schedule
    • Two WFEs (NLB)
    • Default log file naming
    • IIS Advanced Logging extension not installed.
    • Windows NLB
    • Crawl service account: spContent
    • Farm service account: spFarm
    • SharePoint setup user administrator account: spAdmin
    • Log files end in ".log".
  2. Preparation
    1. File Naming: By default, in my experience, for standard Windows Server 2012 installations of IIS 8, log file names have the format u_exYYMMDD.log.  They will have this format for all servers, so you cannot dump them all into a single folder without performing some renaming ahead of time.  I append the WFE number to the files.  For example, log files coming from the first WFE in one of my farms are renamed like so: u_exYYMMDD_WFE1.log.  The appended "_WFE1" suffix is the part added.  Check your own log files to see how they are produced: if they follow the same approach to naming as shown here, great: you don't need to customize the script.  Otherwise, you'll need to review the script and make appropriate changes to the SUBSTR methods consistent with your file naming approach.
    2. File Consolidation: You will need to copy all IIS log files from all WFEs into a single folder, renaming them slightly so as to differentiate log files among the servers.  You may have 2, 5, 10 or more log files for each given day.  (A scripted consolidation sketch follows the Files section below.)
    3. Running the script: if your file naming is the same as noted above, just drop the two files into the folder containing your consolidated log files and then execute.  Run from the command line, Log Parser is remarkably fast: I have been able to analyze many GB of log files in 10 seconds or less.
    4. Customizing: you'll note, when you open up and review the load file, that I filter out farm service accounts and other obviously unhelpful entries (e.g., 401 responses, NULL usernames).  You'll need to customize these WHERE clauses for your farm.
  3. Files: here are the two files you need.  Just copy and paste.
    1. guu.bat
      REM Step 1: extract a normalized UserName and Date from every IIS log in this folder.
      logparser -i:IISW3C file:guuLoad.txt -o:CSV -q >guuOutStep1.csv
      REM Step 2: collapse to one row per user per date.
      logparser -i:CSV -o:CSV "select UserName,Date from guuOutStep1.csv group by UserName,Date" -q>guuOutStep2.csv
      REM Step 3: count unique users per date (this is the file you want).
      logparser -i:CSV -o:CSV "select Date, count(*) as users from guuOutStep2.csv group by Date order by Date" -q>guuOutStep3.csv
      REM Finally, write the overall unique-user total to the console.
      logparser -i:CSV "select COUNT(DISTINCT UserName) AS Total-Unique-Users-for-this-Month from guuOutStep2.csv"
    2. guuLoad.txt
      SELECT REPLACE_STR(TO_LOWERCASE(REPLACE_STR(cs-username, '0#.w|', '')),'DOMAIN\\','') AS UserName,
      STRCAT(STRCAT(SUBSTR(EXTRACT_FILENAME(LogFilename),6,2),'/'),STRCAT(STRCAT(SUBSTR(EXTRACT_FILENAME(LogFilename),8,2),'/'),
      SUBSTR(EXTRACT_FILENAME(LogFilename),4,2))) AS Date
      FROM *.log
      WHERE
      sc-status<>401 AND
      cs-username<>'0#.w|DOMAIN\spcontent' AND
      cs-username<>'0#.w|DOMAIN\spadmin' AND
      cs-username<>'0#.w|DOMAIN\spfarm' AND
      cs-username<>'' AND
      cs-username IS NOT NULL AND
      cs-username<>'DOMAIN\SPCONTENT' AND
      cs-username<>'DOMAIN\SPADMIN'
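
Consolidation sketch: here is a minimal PowerShell sketch of the renaming and consolidation described in Preparation steps 1 and 2.  The server names, share paths, and destination folder are hypothetical placeholders - adjust them to your environment before use.

      # Hypothetical sources: two WFEs exposing their IIS log folders via admin shares.
      $dest = 'D:\LogAnalysis'   # folder that will hold the consolidated logs plus guu.bat and guuLoad.txt
      $sources = @{
          'WFE1' = '\\wfe1\c$\inetpub\logs\LogFiles\W3SVC1'
          'WFE2' = '\\wfe2\c$\inetpub\logs\LogFiles\W3SVC1'
      }
      foreach ($wfe in $sources.Keys) {
          Get-ChildItem -Path $sources[$wfe] -Filter '*.log' | ForEach-Object {
              # u_ex160107.log -> u_ex160107_WFE1.log (the naming guuLoad.txt expects)
              $newName = '{0}_{1}{2}' -f $_.BaseName, $wfe, $_.Extension
              Copy-Item -Path $_.FullName -Destination (Join-Path $dest $newName)
          }
      }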

References

  • Claims-based naming: For my farms, I find that usernames captured in the IIS logs include both standard (e.g., DOMAIN\First.Last) and claims-based (e.g., 0#.w|DOMAIN\First.Last) versions.  Therefore, I filter for both versions.  
  • Case sensitivity in WHERE clauses: I also find that the standard version is upper-case, while the claims-based version is lower-case. Since the WHERE statements are case sensitive, I've had to account for this in the SELECT statement using the TO_LOWERCASE method.
  • Different syntax for common TSQL terms: Log Parser 2.2 implements a syntax that is very similar to TSQL, but not quite.  There are some odd but understandable variations, such as REPLACE_STR for the TSQL REPLACE method, or STRCAT for TSQL's simple "+" concatenation (see the short illustration after this list).
  • Inadequately defined help file: When searching through the Log Parser 2.2 help file, you'll find that not everything is adequately described.  For example, if you search for the term CONCATENATE, so as to find out what the concatenation syntax is, you'll find nothing.  This doesn't mean that the appropriate syntax hasn't been implemented; it's just that the term CONCATENATE does not appear on the page that presents the Log Parser syntax for concatenation, which is STRCAT.  In this case, I found the desired syntax by performing searches on related terms, such as STRING, and then carefully reviewing the results.  It took a while, but I eventually found what I was looking for.  Be sure to perform multiple searches within the Log Parser 2.2 help file (it's a CHM file) when hunting for the necessary term.
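
As a short illustration of those syntax differences, here is a hypothetical one-liner (the DOMAIN prefix and the *.log path are placeholders) run from the consolidated log folder.  The rough TSQL equivalent of the SELECT would be: REPLACE(cs-username, 'DOMAIN\', '') + ' on ' + CONVERT(varchar(10), date).

      logparser -i:IISW3C -o:CSV "SELECT DISTINCT STRCAT(REPLACE_STR(cs-username, 'DOMAIN\\', ''), STRCAT(' on ', TO_STRING(date, 'yyyy-MM-dd'))) AS WhoWhen FROM *.log" -q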