Thursday, September 5, 2013

The CRM Field Guide… Guide

You might not know it, because I’ve been silent on the matter, but I am a contributing author to The CRM Field Guide.  Specifically, I authored Chapter 24: Rapid Development Best Practices.  The intent of the chapter is to introduce people with backgrounds similar to mine to the practices I have developed over the years to produce code for CRM 2011, well, rapidly.  However, before I delve into some of the meat and value of the book or my chapter, I’d like to journal my experience of writing for the book.

It’s been stated before that the project was long-running, with several starts and stops along the way.  To the credit of Donna Edwards, Julie Yack, and Joy Garscadden, wrangling so many MVP authors was not a minor task.  As MVPs, we are all actively engaged in the Dynamics CRM community, while working day jobs.  These responsibilities conjointly leave precious little time for most other endeavors, family notwithstanding.

I was brought into the project later, and given the opportunity to write about my passion: development.  Because my exposure to CRM 2011 was light at the time, from a technical standpoint, and because many other authors covered a great deal of the technical components (solutions, customization strategies, etc.), I decided to focus on the tools and processes I use to handle development projects from start to end—evolved from my experience with CRM 4.

Deadlines were short, and yet I somehow ended up producing the single largest chapter, as measured in raw word counts.  It was amazing to write professionally, and the passion carried me quickly through my task.  To be a published author, for the first time, had been a dream as strong as my passion to become a professional software developer.  Marrying both experiences together is my crowning achievement.

However, I remain dissatisfied with my content—deserved or not.  It has received high praises from reviewers and developers who are introducing themselves to Dynamics CRM, yet I feel as though it could be better.  Perhaps, sometime soon, I’ll produce something stronger and (in my eyes) worthy of my audience.  Apart from this private embarrassment, I have been biding my time in writing up anything because many other authors have continued to do it on my behalf.  (Thanks!)

So why break the silence?  Because Jerry Weinstock, of “Jerry Weinstock” fame, has produced a Training Curriculum addendum, for the following roles:

  • CRM Admin
  • Power User
  • Business Analyst
  • IT Support
  • Developer
  • New User

He has expertly fleshed out approximately 10 to 15 chapters from the book as reference material for each role, and the result stands as a fantastic appendix to a fantastically dense and useful book!  This curriculum is affectionately referred to (by me) as The CRM Field Guide Guide, and will help each role focus on their specific area of expertise or interest.

Jerry’s efforts will hopefully illustrate that the book’s expansive content is versatile enough to suit many purposes, and that it has earned a place in the library of any organization that works with Dynamics CRM.  Thank you, Jerry, for adding value to an immensely valuable resource.

Where can you obtain it?  Well, because the curriculum is only useful in the context of the book, it’s available as downloadable content (DLC) for all who purchase The CRM Field Guide, either in hard or digital copies.  To celebrate this addition, I will provide a discount code for the digital copy of the book to the first 10 people who tweet a mention of the book (crmfieldguide.com), and include me (@crmentropy) on it.  Look out for a Private Message from me!

Cheers!

Wednesday, August 28, 2013

Entity.GetAttributeValue<T> Explained

The CRM 2011 SDK offers a handful of useful extensions over the previous versions that many Dynamics CRM developers have come to appreciate.  One in particular that I’ve only recently come to use is the Entity.GetAttributeValue<T> method.  I’ve seen it used in many places, and have started using it myself, but I never fully understood how it could be expected to behave.  The Microsoft documentation—as with a great deal of the SDK—doesn’t offer much on this particular method, and provides no example code.

I’ve found myself continuing to lean on Entity.Contains as a measure of safety, and checking the Entity indexer for type (using the “is” keyword) to make sure I wouldn’t violate a Type constraint.  My concerns were alleviated today, when I took a few minutes to run some scenarios through the method to see what the results were.  There’s some good news I’d like to share.

The given key was not present

The only method parameter taken by Entity.GetAttributeValue<T> is the name of an attribute.  If that attribute is not present in the Entity instance, the method will not throw an exception.  Instead, it will return a “default value” of Type T, or null if T is nullable.

Be careful not to assume that this means the Entity contains your attribute, however, because it may not.  Only Entity.Contains can assure you that a null return from Entity.GetAttributeValue<T> means the attribute is truly null, and not just effectively null because it is missing.
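For illustration, here’s a minimal sketch of that distinction (the entity and attribute names are mine):

Entity account = new Entity( "account" );

// "name" was never added to this instance, yet no exception is thrown:
String name = account.GetAttributeValue<String>( "name" );   // returns null

// Only Contains can tell you whether the attribute is actually present:
if ( account.Contains( "name" ) )
{
    // present; a null from GetAttributeValue here is truly null
}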

Nullable Types

The non-primitive types provided by the SDK are all nullable (e.g. OptionSetValue, Money), meaning returns of these types will be a complete instance of the Type, or null.  However, the primitive types the SDK uses are not inherently nullable (e.g. int, bool).

What happens if you pass a non-nullable type into the Type parameter T, and the value for the requested attribute is, in fact, null?  Well, thankfully the SDK converts the null into a bitwise 0 value for T.  This is what I call the “default value”, and I’ll get to that in a moment.

If you prefer to always receive null, instead of the “default value”, then you can pass a nullable-extended, primitive Type (e.g. int?, bool?, decimal?), and the SDK will cast the requested attribute accordingly.
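Here’s a quick sketch of the difference, using a hypothetical integer attribute that is present but null:

Entity record = new Entity( "account" );
record[ "my_integer" ] = null;   // present, but null

Int32  value1 = record.GetAttributeValue<Int32>( "my_integer" );    // 0 (the "default value")
Int32? value2 = record.GetAttributeValue<Int32?>( "my_integer" );   // null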

Default values

A “default value” is the result of a null return being coerced into a non-nullable type.  The null return could come from either: a) a missing attribute, or b) a null value on the attribute.  When converting null, the SDK selects a bitwise 0 value for your type.  Consult the chart below for the expected returns:

Type                              Return
Numerical (int, decimal, double)  0
Boolean                           false
DateTime                          DateTime.MinValue
Guid                              Guid.Empty

This can be handy if you are performing calculations on primitive types and would rather not write several lines of code to weed out null values (since performing any mathematical operation on null results in null).
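For example, here’s a minimal sketch (assuming System.Linq and a hypothetical EntityCollection, someRecords, whose records carry a my_integer attribute) that totals a value across the collection without a single null check:

Int32 total = someRecords.Entities.Sum(
    record => record.GetAttributeValue<Int32>( "my_integer" ) );
// missing or null attributes simply contribute 0 to the total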

Specified cast is not valid

If, by some misfortune, you pass a Type to the T parameter that is not Metadata-compatible with the actual value of the attribute you requested, this method will thankfully throw an InvalidCastException, rather than returning a default value.  This assures you that you’re not mismatching Types.

For example, if an attribute is int and has an int value, but I pass ‘decimal’ to T, my code will throw an InvalidCastException.  Even though casting an int to a decimal is trivial for .Net, if my supplied Type does not match the Metadata schema, I will run into this exception.
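In code, the failure looks like this (same hypothetical attribute):

record[ "my_integer" ] = 42;   // stored as an Int32

// Throws InvalidCastException, despite the trivial int-to-decimal conversion:
Decimal wrong = record.GetAttributeValue<Decimal>( "my_integer" );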

Conclusion

So there you have it: a full rundown of how Entity.GetAttributeValue<T> works and how you can expect it to behave in your code.  Hopefully, you’ll realize, like I did, that you can save yourself a lot of code by using this method and relying on its undocumented behaviors.  It was a fantastic addition by the Dynamics CRM development team, and a hidden treasure for developers like me who became accustomed to writing all manner of attribute evaluations with previous versions of the product!

Friday, May 24, 2013

TypeScript, CRM 2011, and You

Those who know me closely know that I hold a certain disdain for JavaScript.  Not because I’m particularly bad at it (though some of my public and private code might call that into question), but because I believe I’m good at it.

One thing that has always bothered me is how difficult it is to manage data structure contracts (read: types) throughout a large amount of code.  To achieve application-level fidelity, you need a specialized library that calls for specialized object factories—breaking away from all native syntax, and making useful documentation generation and structure a miserable chore.  At least, that’s what it’s been for me.

It didn’t have to be this way, but it is.  Take, for example, the nearly useless “typeof” operator.  At some point, somebody on the JavaScript language development team thought that types could be important, and then promptly died before his thought could infect other web developers.

That’s what development comes down to for me:  data contracts.  Value or variable-based, I care not.  But, I want strong ones, flexible ones, abstracts, interfaces, inheritance, and predictability.  JavaScript developers either make their livings, or ruin their lives, by writing to edge-cases—and in JavaScript API development, it’s all edges, all the way down.  If strong types were possible, I speculate that you could cut the jQuery library, for example, in half. 

Whatever the shortcomings, they’re neither here nor there, because JavaScript only hit puberty in 2009 and is still evolving.  Its ubiquity is undeniable, and its adoption and enrichment are accelerating.  Because of its slow maturity, it can pick and choose from more mature languages to implement advanced concepts.  This, of course, is driven by the myriad developers who leave their comfort zones (perhaps reluctantly) to dive into this wishy-washy world of “nearly anything goes.”

Let’s face it: IDEs give JavaScript developers most of their power.  IntelliSense’s JavaScript support has improved by leaps and bounds in recent iterations.  Still, most of the enrichment and “type” inference comes from documentation comments, in either the VSDoc or JSDoc formats.  JavaScript’s native introspection capabilities, however, are like Reflector on meth-addicted steroids.  At runtime, you can achieve things that make .Net bleed with envy.

The question, for me, has always been about bridging the gap: how do I posit stronger typing into my development, if not into the language?  At first, I was drunk with prototypes and closures, and tried to produce my own typing/inheritance model.  Its quirks, and its desperate lack of syntactic approximation to stronger-typed languages, led me to abandon it for something else.

In my search, I’ve encountered a couple of strong contenders.  Dart is a promising project, and may go somewhere, but TypeScript appears syntactically closer to ECMAScript 6 (which will ultimately replace the current iteration of JavaScript).  The ECMA draft specification doesn’t appear to do much for type management, unfortunately—but that would be a major paradigm shift for JavaScript at this stage.  And honestly, I see TypeScript as the stepping stone to it.

The big reason I hope this is that all valid JavaScript is inherently valid TypeScript.  That means I don’t need an interop library to use rich and mature JavaScript libraries, nor do I have to cross-compile them into some alternate representation.  However, TypeScript benefits strongly from “definition” files, which provide type associations to improve the TypeScript experience of those libraries.  I’m proud to announce today that I have helped with two such definitions for use with Dynamics CRM 2011:

  1. Definition file for the Microsoft Xrm JavaScript namespace: https://xrm2011typescript.codeplex.com/
  2. Definition file for the XrmServiceToolkit project:
    https://xrmservicetoolkit.codeplex.com/ 
    (you might have to get it from the Source: Scripts\typescript\XrmServiceToolkit.d.ts)

I use these two together often, coupled with internal definition files for jQuery, JSON, and other common libraries.  The stock definition files have done a great job, so these are the only two new ones that I’ve needed, so far.  (Though, now that I’m getting into KendoUI, I’d love to see definition files for it.)

This is all well and good, but how is it supposed to help you?  There are many good arguments surrounding the adoption of TypeScript, and other MVPs have addressed it in various venues.  Personally, I take the following benefits, over raw JavaScript development:

  • Types!  (If that wasn’t the first thing on this list, I should be soundly questioned.)
  • Compiles to JavaScript by basically removing itself from the code, and applying some cohesive closure structures to the results.  (This is why all valid JavaScript is valid TypeScript.)
  • C#’s interface and inheritance model!  (Multiple inheritance isn’t supported, but multiple interfaces are.  My C++ days are calling, and I let it go to voicemail.  I’m ok with this model.)
  • Multiple method signatures!  (This doesn’t change anything in the “compile” to JavaScript—yet—but makes the IDE experience nicer.  See the sketch after this list.)
  • Generics!  (…coming in TypeScript 0.9)
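As a small taste of the interface and signature items above, here’s a minimal sketch (the types and names are mine, not from any definition file):

interface NamedRecord {
    id: string;
    name: string;
}

interface Account extends NamedRecord {
    creditLimit: number;
}

// Multiple signatures are checked at design-time, though only the
// single implementation below is emitted to JavaScript.
function describe(record: Account): string;
function describe(record: NamedRecord): string;
function describe(record: any): string {
    return record.name + " (" + record.id + ")";
}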

These are all concepts I encounter daily in my C# coding, and it’s nice to finally have a way to express them in my JavaScript development, without getting in my own way, or doubling the amount of work necessary to apply it.  However, as John Petersen recently expressed in CODE Magazine, “TypeScript compiles to pure JavaScript but that isn’t a license for ignorance about how JavaScript works.”

So, if you want to start learning TypeScript, the best way to start is to do the following:

  1. (Optionally) Read the TypeScript language specification (but then, I’m into things like that)
  2. Install the TypeScript plugin for Visual Studio (build support)
  3. Install the Web Essentials add-in for Visual Studio (IntelliSense support—very basic, currently, but will identify errors at design-time)
  4. Follow the Quick Start guide for TypeScript (to learn syntax basics)

I find the following tweaks to Visual Studio’s options handy:

  • Text Editor\TypeScript\Project: Uncheck all “Compile on Save” options; these collide with Web Essentials, and I prefer to use the “Build” action anyway.
  • Web Essentials\TypeScript
    • “Compile to EcmaScript 3”:  False
    • “Add generated files to project”:  True
    • “Compile all TypeScript files on build”:  True
    • “Compile TypeScript on save”:  False (But, if you want to, use this option, instead of the one from the TypeScript plugin—for now.)
    • “Show preview window”:  False (This one just gets in my way, and is only really useful if you set the “compile on save” option.)

Then, you can use reference tags in your own TypeScript files, to include the definition files from above and start enjoying TypeScript in CRM 2011 development:

/// <reference path="definitions/Xrm2011.d.ts" />
/// <reference path="definitions/XrmServiceToolkit.d.ts" />

I hope you’ll be as interested in TypeScript as I am, because it’s a fantastic, open-source initiative from Microsoft—and it’ll prepare you for the next generation of JavaScript interpreters.

Thursday, April 11, 2013

Love Your UI: Icons for CRM

I’m going to make an unusual break from my normal kind of post to talk about customizing Dynamics CRM with paid utilities, and specifically about sourcing professional-appearing icons for CRM’s UI.  The topic doesn’t occur very often in the forums, and generally the advice has been to search Google.

I can’t discount that method, as I’ve used it in the past to locate royalty-free, attribution-free, and open-licensed sets of icons.  While the quality of many sets is great, the bulk of the files is cumbersome to manage, and customizing them requires a significant investment of time and energy.  Looking for the “modern” Microsoft style of flattened icons, I’ve found very few free options that match it and look good doing so.  (However, if you’re looking for a good compilation of options, look no further than Chris Coyier.)

What an illuminating experience working with Axialis IconWorkshop has been!  I inquired about the product about 6 months back, and was very graciously granted a gratis license to their full “Pure Flat” icon sets.  I’ve had a handful of opportunities to use them since, in conjunction with IconWorkshop itself, to author and customize the results.  Here’s a breakdown comparison of my previous “Google discovery” experience against using a professional tool:

Locating an Icon

Using Google:

Generally, I don’t use Google to find a single icon.  There is a significant amount of danger of violating copyright and intellectual-property rights.  Icon authors are hard-working people too, and icon theft is one of the unspoken undercurrents of web applications, due in part to lazy people like I used to be.

So, the last few times that I used Google, I went straight for “open-source, royalty-free, copyright-free” icon libraries.  There are many, but obtaining them from reputable sites can take some work.  Then, they are bundled into zip files (typically), and generally contain thousands or tens-of-thousands of icon files.  Filenames are the primary descriptors to search on, so if nothing turns up for a basic term, trying variations or, at worst, scanning through thumbnails is the best bet.

Experience Ratings (1 best – 5 worst)
  Time Consumption                 4
  Skill Required                   2
  Efficiency of Desired Outcome    5

Finding icons that are legal to use can be a struggle, and sorting and managing collections with varied packaging and naming conventions often leaves much to be desired when it comes to cataloging or describing them.

Using IconWorkshop:

Searching through icons that are imported into Axialis Librarian is a fast process, made faster, I suspect, by indexing of the files.  This indexing extends to metadata keywords, but unfortunately the Axialis icon sets don’t come preloaded with any (at least not in my sampling).  Adding your own keywords takes time, but can certainly improve searching.  For the basis of rating this experience, though, I will not consider it an advantage.

When it comes to the image you want to use, Axialis has many libraries with an impressive number of icons, but they don’t yet have a “full set” purchase experience—unless you use their in-site contact form to inquire about one.  Their prices are fair for long-term use, and more importantly, the sets include “base” icon images and “overlay” images that can be combined to create new permutations easily.  Searches will generally turn up both; however, variety is going to cost you.  That said, it’s generally easy to figure out which set likely contains the candidate icon you want.

Thankfully, you can import assets from other libraries (especially any “free” sets you may already have), and IconWorkshop can be useful for searching those, as well.  Depending on how you look at it, Axialis icon sets are not given first-class status over other libraries; that’s either noble, or a lost opportunity.

Experience Ratings (1 best – 5 worst)
  Time Consumption                 4
  Skill Required                   2
  Efficiency of Desired Outcome    4

While IconWorkshop helps with searching and organizing, Axialis’ icon sets are hamstrung by the lack of useful keyword metadata accompanying their files.  They could have taken a “2” or “1” rating in the efficiency department, and probably lower (better) scores in other areas.  However, the improved organization and the search capabilities maintain a slight edge over searching through the file system with something like Agent Ransack.

Icon Set Quality

Using Google:

“Free” icon sets come in varying styles and quality, so it’s hard to judge them collectively.  Often, it’s difficult to find an icon that comes in several native sizes.  Most “free” sets offer one or two sizes, and are often capped at 32x32, so scaling up or down impacts quality by producing pixelated or blurry results, respectively.  Within Dynamics CRM, 32x32 and 16x16 are used throughout the ribbons, grids, and menus; however, custom controls and pages can benefit from larger or smaller icons.  I have often found myself repeating the searching phase to find several icons that closely match each other in the various sizes.

File formats are another issue, although generally minor given a good image editor.  Sets generally come in one or two formats, and they may or may not implement transparency.  It becomes important to check and convert, where necessary, to meet your needs.  (JPEG and GIF to PNG, for example.  Maybe that’s only my need, so your mileage may vary.)

Experience Ratings (1 best – 5 worst)
  Time Consumption                 3
  Skill Required                   3
  Efficiency of Desired Outcome    3

Across the board, for most cases, if you find an icon you want, and are either lucky enough to have it look good in every size you need, or content with visual scaling effects, this is not a bad option.  In my experience, however, it tends to be fairly mediocre.

Using Axialis Icons:

The best thing I can say:  256x256 all the way down to 16x16 of hand-crafted icon goodness.  Each set comes in ICO, BMP, and PNG formats, which covers the Web and Windows spectrum nicely.  On top of this, overlays are separated into their own files with transparency masks as companions.  These only factor into the ratings of this category insofar as they also come in native resolutions that are clean, well-scaled, and visually appealing at all sizes, and insofar as Axialis has pre-combined many “obvious” overlay and base image permutations.

Experience Ratings (1 best – 5 worst)
  Time Consumption                 1
  Skill Required                   1
  Efficiency of Desired Outcome    2

Having pre-built ranges and formats adds a tremendous amount of space to the icon libraries, but the convenience of always having a size and format that works without additional thought is hard to trade away after experiencing it.  The quality and appearance of each image is strikingly good.  However, I do wish the files came in an SVG or font-based format—I haven’t needed those for CRM yet, so that doesn’t affect my rating.

Customizing an Icon

Using GIMP:

I’m not going to throw Google under the bus in the image editing department.  There are lots of icon editors and image manipulation suites available.  My personal favorite is GIMP.  It has many of the features I need for advanced image editing, and it’s open source.  After 10 years, I’m fairly comfortable performing a wide variety of tasks with it.

Unfortunately, small images don’t feel at home in suites meant for larger ones, but it works.  Layers are especially handy for putting together transparency masks and overlays… but overlays are uncommon in free icon sets, so most overlays I’ve used were custom-produced, adding a lot of time to achieve clean results.

GIMP’s main advantage is its tremendous image editing capabilities—professional caliber features.  However, its learning curve is equally tremendous, and I never quite found the time to automate some repeated tasks.  It does, however, produce superior quality.

Experience Ratings (1 best – 5 worst)
  Time Consumption                 5
  Skill Required                   4
  Efficiency of Desired Outcome    4

Unfortunately, GIMP really just helps me “limp” along with free icons: cleaning up scaling quirks, adding custom overlays, or modifying palettes.  The time investment is not insignificant, though.  I just didn’t realize it could be so much better.

Using IconWorkshop and Axialis Icons:

IconWorkshop has a feature that combines the overlays and base images you specify, in all permutations, to automatically produce an array of sizes and decorations without additional effort.  This is one of the faster ways to simply knock out the 32x32 and 16x16 sizes I desire for CRM 2011.

The image editing capabilities of IconWorkshop are lackluster, and just advanced enough to satisfy the needs of basic manipulation.  Thankfully, I find myself working within their base+overlay formulas well enough that I haven’t had to step outside of IconWorkshop for anything more advanced.  It’s a borderline comfort, but it fits well for the purpose.

It’s obvious that combining images is IconWorkshop’s strong suit, and that’s why Axialis’ icon sets are amazing within it.  The sets can stand alone, and IconWorkshop can do its work with any source, but together they offer a purpose-built system that streamlines the whole process of tailoring icons—if you require it.  Again, Axialis has taken the liberty of combining common base and overlay permutations and including them in their large icon set files, which reduces the need for customizing in the first place (or simplifies recombinant decoration).

Experience Ratings (1 best – 5 worst)
  Time Consumption                 2
  Skill Required                   3
  Efficiency of Desired Outcome    2

By using a simpler tool and products that are built to work with it in an optimized fashion, I have shaved a lot of time from the process of building a slick-looking, custom UI within Dynamics CRM.

Average Scores (1 best – 5 worst)

                                   “Free”   Axialis
  Time Consumption                 4        2.33
  Skill Required                   3        2
  Efficiency of Desired Outcome    4        2.66

I’m starting to understand the adage: “It’s not nobler to do by hand, what can be better and faster done with a tool, when lunch is on the line.”

Thursday, February 28, 2013

High Availability Workflows

Since joining Avtex, I have been able to expand my horizons and gain exposure to customers with unique needs.  I try, as hard as possible, to incorporate or build on top of CRM’s out-of-box experience, and refrain from writing code I don’t have to.  To that end, I’d like to share a simple solution for ensuring that Workflow triggers aren’t missed while a Workflow is deactivated for updating.

It’s no secret that business processes change on-the-fly.  Implementing changes to active Workflows can be tricky, from an availability standpoint.  Most companies adopt a routine of modifying Workflow designs after hours, or with operations momentarily held until the modification is complete.  This presents a dynamic and potentially troubling hurdle for “always on” companies.

Because Workflows are listeners to CRM operations, rather than direct participants, any downtime with a particular Workflow means that it’s no longer listening to events.  This allows for the potential of unapplied business logic, and can be very difficult to diagnose or troubleshoot.

Though the space of downtime can be reduced to mere minutes—by developing in an alternate environment and shipping the updated Workflow in as a Solution—the window of opportunity for actions to tip-toe past a disabled Workflow still exists.  For some companies, this is simply unacceptable.

However, you can use the out-of-box Workflow abilities to create high availability Workflows that can be taken offline, modified, and then reactivated, all without missing a single event that was triggered while the Workflow was offline.  This works by splitting the Workflow’s functionality into two separate Workflows:

  1. An event listening “Dispatcher” Workflow; and
  2. A “Business Logic” application Workflow

By isolating the business logic into a “child Workflow” which is called by its corresponding Dispatcher, one can take the Business Logic offline, while leaving the Dispatcher functional.  This allows the configured triggers of the Dispatcher to operate continuously, though the step which calls the Business Logic counterpart will fail during the downtime.

Though the Dispatcher jobs enter a “Waiting” state, they are easy to identify (especially if you allow them to delete themselves when they’re successful) in order to resume.  This behavior is generally sufficient to allow a wider window for Business Logic adjustment, without requiring additional intervention to process the new logic against records that triggered during the downtime.  That brings up another excellent advantage of this pattern:

With Dispatchers, you can immediately terminate the existing logic and register all further processing against the future logic.

Note:  You cannot retarget a different workflow, as that would require taking the Dispatcher offline—which defeats the purpose.  The System Job acts as a cloned instance of a Workflow, so the Dispatcher will always target a specific, business-logic Workflow.  You can approximate a retargeting scenario with Dispatcher juggling, but it would involve mitigating overlapping triggers.

Here’s an example scenario that updates an Account using the Dispatcher and Business Logic pattern:

First, create the Business Logic workflow, and for “Available to Run” select “As a child process”.  Remove all selections from “Options for Automatic Processes”.

[Screenshot: Business Logic workflow properties]

Then, create the Dispatcher workflow with the “Options for Automatic Processes” setting you desire, and configure it to call your Business Logic workflow.

[Screenshot: Dispatcher workflow with a step that calls the Business Logic workflow]

You may now activate both.  Your Dispatcher is diligently watching the events, and the Business Logic is processing your rules.

Here is what happens when you deactivate the Business Logic workflow to make modifications:

[Screenshot: deactivating the Business Logic workflow]

My example uses a Dispatcher that listens to Account creation, so when I create a new account, here is what I see in the “Workflows” associated to it:

[Screenshot: the account’s associated Workflow jobs]

As you can see, the Dispatcher caught the event, and then entered a “Waiting” state.  If we examine the job, we can see the error:

[Screenshot: the failed child-workflow step in the job details]

It failed on the step that calls my Business Logic.  This job will remain in this state until I resume it.  After completing my modifications to Business Logic, I’ll reactivate it.  Then, I need to identify all my outstanding Dispatcher jobs:

[Screenshot: locating the waiting Dispatcher jobs]

Then, resume them with the confidence that I have missed no important triggers while my Business Logic was momentarily offline:

[Screenshot: resuming the waiting Dispatcher jobs]
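If there are many waiting jobs, this step can also be scripted against the SDK.  Here’s a rough sketch, assuming an IOrganizationService (service), a Dispatcher named "Account Dispatcher", and the standard asyncoperation state codes (Suspended = 1, Waiting = 10, Ready = 0); verify those values in your environment before relying on them:

using Microsoft.Crm.Sdk.Messages;   // SetStateRequest
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Find the Dispatcher's System Jobs that are Suspended and Waiting
QueryExpression waitingJobs = new QueryExpression( "asyncoperation" )
{
    ColumnSet = new ColumnSet( "asyncoperationid" )
};
waitingJobs.Criteria.AddCondition( "name", ConditionOperator.Equal, "Account Dispatcher" );
waitingJobs.Criteria.AddCondition( "statecode", ConditionOperator.Equal, 1 );   // Suspended
waitingJobs.Criteria.AddCondition( "statuscode", ConditionOperator.Equal, 10 ); // Waiting

foreach ( Entity job in service.RetrieveMultiple( waitingJobs ).Entities )
{
    // Resume each job by returning it to the Ready state
    service.Execute( new SetStateRequest()
    {
        EntityMoniker = job.ToEntityReference(),
        State = new OptionSetValue( 0 ),  // Ready
        Status = new OptionSetValue( 0 )  // Waiting For Resources
    } );
}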

That said, I always perform a quick validation, just to be sure:

[Screenshot: validating the results]

Monday, February 11, 2013

A Potent Cocktail: ExecuteMultiple and LINQ

I love it when technologies using the same framework marry together like peaches and cream.  Today, I want to cover the intersection of CRM 2011’s new ExecuteMultiple capabilities and my love of LINQ.  Just to be clear, I’m not talking about using the LINQ provider for CRM, though you can certainly use that to produce a collection of records upon which to perform some operation in bulk.

Instead, I’d like to show you an elegant snippet of code that demonstrates the power of ExecuteMultiple with the cleanliness of succinct LINQ.  Given an EntityCollection (someRecords), suppose that you need to increment some integer attribute (my_integer) on each of the contained records.  For the purposes of this example, I’ll be using the late-bound Entity type.

// Start our request with some basic initialization
ExecuteMultipleRequest bulkIncrementRequest = new ExecuteMultipleRequest()
{
    Settings = new ExecuteMultipleSettings()
    {
        ContinueOnError = true,
        ReturnResponses = true
    },
    Requests = new OrganizationRequestCollection()
};

// Compile the collection of requests
bulkIncrementRequest.Requests.AddRange( from record in someRecords.Entities
                                        where record.Contains( "my_integer" )
                                        select new UpdateRequest() 
                                        { 
                                            Entity = new Entity( "someRecord" )
                                            {
                                                Id = record.Id,
                                                Attributes = {
                                                    new KeyValuePair<String, Object>( "my_integer",
                                                        ( ( Int32 ) record[ "my_integer" ] ) + 1 )
                                                }
                                            }
                                        } );

// Execute the requests
ExecuteMultipleResponse bulkIncrementResponse = ( ExecuteMultipleResponse ) service.Execute( bulkIncrementRequest );

// Check "IsFaulted" to determine if any of the submitted requests failed
if ( bulkIncrementResponse.IsFaulted )
{
    Int32 errorCount = ( from incrementResponse in bulkIncrementResponse.Responses
                         where incrementResponse.Fault != null
                         select incrementResponse ).Count();
}

By feeding a LINQ query into the AddRange method of the Requests collection, I was able to condense the code that loops through each record, selects the original value, increments it, and produces a request to update the record.

Keen observers will notice that I created a new Entity object from the old one; this is a best practice to avoid triggering updates on attributes undesirably.  However, it also allows me to perform the increment operation inline.  I’m sure you could push this operation into a secondary where clause, but I think that makes the query logic less readable.  But your mileage may vary.  :)

Wednesday, February 6, 2013

Add Parameters to CRM 2011 Lookup Dialog

Alternate Title: How to hide the “New” or “Properties” button from the CRM 2011 Lookup Dialog.

Following on the heels of yesterday’s post, I have finally discovered a way to eliminate the pesky “New” and “Properties” buttons from the Lookup dialog.  This was, again, more easily accomplished in the previous version of CRM, with the following code:

crmForm.all.<lookup>.AddParam("ShowNewButton", 0);

As we’ve seen before, this function did not disappear from CRM 2011—it was simply moved.  Now, it is invoked somewhat implicitly via a behavioral association to a highly organized JavaScript object structure.  Yet again, after hours of poring over Developer Tools in Internet Explorer (this time with a lot more profiling and debugging), I have figured out how Microsoft does it.

Here’s the new way to hide the “New” button:

var lookupControl = Sys.Application.findComponent("some_lookupid");

if (lookupControl != null)
{
    lookupControl._element._behaviors[0].AddParam("ShowNewButton", 0);
}

Like most of the things on this blog, this is highly unsupported, but I personally believe that this is a harmless hack.  There is one caveat, however, to the above code:

  • The ‘_behaviors’ member is a collection of references to classes.  For every Lookup I could find, there was only one entry, and it exposed the “AddParam” function.  Conceivably, there could be other Lookups with multiple behaviors, and the first item in ‘_behaviors’ may not be the one you want.  You have been warned; a defensive sketch follows.
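If you want to guard against that caveat, here’s a small sketch (just as unsupported, of course) that scans the collection for a behavior that actually exposes AddParam:

var lookupControl = Sys.Application.findComponent("some_lookupid");

if (lookupControl != null) {
    var behaviors = lookupControl._element._behaviors;

    for (var i = 0; i < behaviors.length; i++) {
        // Only call AddParam on a behavior that exposes it
        if (typeof behaviors[i].AddParam === "function") {
            behaviors[i].AddParam("ShowNewButton", 0);
            break;
        }
    }
}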

Tuesday, February 5, 2013

Custom CRM 2011 Form Notifications for UR12

[UPDATE: 2013.04.12 I added some additional tricks to this code, at the bottom.]

Before Update Rollup 12, it was relatively simple to use the original “form alert” hack (seen here, here, here, and here) to produce custom, inline alerts and notices for the end user.  It’s a great feature of the form, and I wish I knew why using it is unsupported.

Alas, many realized that these customizations would be undone by Update Rollup 12, and indeed they have been.  So, allow me to show you what appears to be the “Microsoft” way of accessing the new form notification system.  The added bonus is that this method requires no additional libraries or external references, and should be cross-browser.  (Disclaimer: the following information was not released or documented by Microsoft; I discovered it after a few hours of poring over Developer Tools in IE10.)

The original hack could never have been cross-browser, because it relied on the “htc” behavior file which backed the original “crmNotifications” element.  Fortunately, these functions haven’t changed… they’ve just moved to a new home.  Here’s the old way (pre-UR12):

var notificationsArea = document.getElementById('crmNotifications');

notificationsArea.AddNotification('noteId1', 1, 'namespace', 'Message.');

And here’s the new way (post-UR12):

var notificationsList = Sys.Application.findComponent('crmNotifications');

notificationsList.AddNotification('noteId1', 1, 'namespace', 'Message.');

Both examples do the same thing in their respective CRM 2011 revisions.  This customization remains as unsupported as it ever was; however, there is relatively little danger in using it.

Here are some additional tricks you can use:

notificationsList.SetNotifications();

That will reset the notifications array with an empty set.  Also, you can hide the notifications area by using:
notificationsList.SetVisible(false);