Silverlight has been Microsoft's golden child since v2 was released last year. The impact within the community has been astounding. Some demand the use of Silverlight without actually recognizing when and where the technology makes sense and others scoff at Silverlight either in favor of Flash or as a technology as "useless" as Flash. I roll my eyes every time I hear any of these three opinions... and they happen a lot. Flash went thru the hype cycle years ago and now it's Silverlight's turn. What I find amusing is that the hype seems to be much more powerful with Silverlight than it ever was with Flash. All we can do is fight the good fight.
Not every rich experience needs to be Silverlight. JavaScript frameworks are making life as a web developer easier and easier, so I'd recommend that always be the default choice. Unfortunately, most developers still find the pain of JavaScript development too great. While I'm a big fan of JavaScript, it is far from a perfect language and is severely lacking when it comes to development and debugging tools. Flash and Silverlight both simplify things with better tools and a single-platform vision that tremendously improves cross-browser development, but Flash is still lacking the one thing that makes Silverlight a no-brainer: XAML, backed by real programming languages.
XAML is immensely powerful and will continue to grow as more and more WPF features make it into Silverlight. XAML takes a new way of thinking, but it's well worth it for the simplicity and ease of development you get. But, more important than XAML is the fact that you have any .NET language you want and, with the inclusion of the Dynamic Language Runtime (DLR), there's virtually no reason not to use Silverlight. The one and only benefit Flash has is more mature tools. This is very important, but it is only a matter of time. Microsoft has both the will and the ability to overcome the current Flash tooling. The days of Flash are severely numbered. A look at the job market only confirms this. I just wish I had some of the same numbers from when Flash was initially released to compare the difference.
I truly believe that, if you're a web developer using any language, you need to take the time to understand how Silverlight can benefit you. Yes, HTML5 is coming, but the power and flexibility of environments like Silverlight will quickly surpass anything the W3C will ever be able to come out with a specification for. Heck, in less than 2 years, we've seen 3 releases of Silverlight and a beta version for the fourth, with speculations that Silverlight 4 is likely to release at Mix 2010, making it 4 full releases in 2.5 years. I'd like to see any one W3C spec ratified and fully released in all major browsers in such a timeframe. Such a feat is completely unheard of. Nevertheless, don't let me blab on about it. The numbers speak for themselves...


One thing I love about C# is that I'm constantly learning new ways to simplify and write less code. Perhaps PowerShell has a lot to do with this, but I put a lot of value in the power of the one-liner. With that, I wanted to share something small I recently discovered.
There are three main ways to initialize your read-only class properties: field initializer, constructor, or property accessor. The first thing you need to consider when determining the right approach is whether you should use a readonly or get-only variable. A readonly variable has two primary benefits: guaranteeing the value won't change and better ensuring thread-safety. I'm not going to go into either of these, but I will say, if you can make your variable readonly, do it. The main reason not to make your variable readonly is if its initialization is resource-heavy and the variable isn't always crucial. There are other things to consider, but I want to focus more on the implementation of this code rather than the reasoning behind deciding on a good approach.
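To make those three initialization styles concrete, here's a minimal sketch. The Team, PersonCollection, and member names are illustrative stand-ins, not anything from a real codebase.

```csharp
using System.Collections.ObjectModel;

public class Person { }
public class PersonCollection : Collection<Person> { }

public class Team
{
    // 1. Field initializer -- runs before the constructor body
    private readonly PersonCollection _members = new PersonCollection();

    // 2. Constructor -- readonly fields may also be assigned here
    private readonly PersonCollection _leads;
    public Team()
    {
        this._leads = new PersonCollection();
    }

    // 3. Property accessor -- lazy, so the field can't be readonly
    private PersonCollection _alumni;
    public PersonCollection Alumni
    {
        get
        {
            if (this._alumni == null)
            {
                this._alumni = new PersonCollection();
            }

            return this._alumni;
        }
    }
}
```

Note that only the third style defers the allocation, which is what the rest of this post focuses on.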
Back in the .NET 2 days, I used the following approach to lazy loading.
private PersonCollection _people;

public PersonCollection People
{
    get
    {
        if (this._people == null)
        {
            this._people = new PersonCollection();
        }

        return this._people;
    }
}
One way to achieve a one-liner would be to use an inline if statement.
private PersonCollection _people;

public PersonCollection People
{
    get { return ((this._people == null) ? (this._people = new PersonCollection()) : this._people); }
}
The main problem with this is there's more one-liner than simplicity. This is a common problem with the inline if statement. For this reason, I avoided this type of lazy load approach.
Recently, when hammering thru some code, typing == null just made me think about the ?? operator. For those that don't know, this is the null-coalescing operator, introduced in C# 2.0 alongside nullable types (Nullable&lt;T&gt;). If the value on the left is null, the value on the right is returned; otherwise, the left value is returned.
private PersonCollection _people;

public PersonCollection People
{
    get { return this._people ?? (this._people = new PersonCollection()); }
}
Nothing revolutionary, but, as I mentioned before, I love my one-liners!
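As a quick sanity check, here's the pattern in a complete, runnable sketch (Department and PersonCollection are stand-in types). One caveat worth noting: the pattern isn't thread-safe; two threads hitting the getter at the same time could each create an instance.

```csharp
using System.Collections.ObjectModel;

public class Person { }
public class PersonCollection : Collection<Person> { }

public class Department
{
    private PersonCollection _people;

    // Lazy one-liner: create the collection on first access, reuse it after.
    public PersonCollection People
    {
        get { return this._people ?? (this._people = new PersonCollection()); }
    }
}

class Demo
{
    static void Main()
    {
        var dept = new Department();
        // First access creates the collection; later accesses reuse it.
        dept.People.Add(new Person());
        System.Console.WriteLine(dept.People.Count); // prints 1
    }
}
```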

I can't tell you how many resumes I've read and interviews I've performed in the name of finding a quality SharePoint developer. After seeing my customer painstakingly struggle thru this same process, I finally decided to put together a couple short blurbs to cover what it is to be a SharePoint designer and a SharePoint developer.
I lump administration and design/customization together because I honestly believe you can't have one without the other -- at least to some extent -- but I'm obviously looking more for the latter than the former. Let me just say that, if I were building up a team to build SharePoint solutions, I'd want at least one of each of these types. Obviously, you'll want someone more focused on administration if you're also doing operations work, but I'm more focused on building solutions than hosting them.
SharePoint Administrator/Designer
Experienced SharePoint administrator with a strong emphasis on customization. Extensive experience with SharePoint Designer and InfoPath is a must, as is a moderate ability to create customized web parts using a mixture of HTML, CSS, JavaScript, and XSL (i.e. using a Data View Web Part). Should at least have an understanding of:
- IIS/SharePoint troubleshooting (i.e. event and ULS logs)
- How to customize branding
- SharePoint Designer workflows
- InfoPath forms
- Applicability and use of content types
- SharePoint web service interfaces
- Feature deployment
- Standard features (i.e. search/indexing, content management, and Shared Service Providers (SSPs))
- Enterprise features (i.e. Forms Server, Excel Calculation Services, and Business Data Catalog (BDC))
- Reporting and business intelligence (BI)
- Security concerns and audience targeting
Experience with PowerShell and ASP.NET development is a huge plus.
SharePoint Developer
Strong ASP.NET (C#) developer with experience building and deploying fully-automated SharePoint solutions. Must have an understanding of at least:
- Standard ASP.NET (including membership providers and their applicability to SharePoint)
- SharePoint object model and web service interfaces
- SharePoint feature packaging and deployment
- Web parts and web part connections
- SharePoint branding components
- Applicability and use of content types
- SharePoint-hosted workflows
- Standard features (i.e. search/indexing, content management, and Shared Service Providers (SSPs))
- Enterprise features (i.e. Forms Server, Excel Calculation Services, and Business Data Catalog (BDC))
- Reporting and business intelligence (BI)
Above all, developers are expected to "live" in Visual Studio, yet be able to identify when SharePoint Designer and/or InfoPath would be more pragmatic -- and follow through with such a solution.
Senior developers and software architects must have broad, hands-on experience across the entire software development lifecycle with formal engineering processes. Experience with defining and documenting an applicable taxonomy and governance plan is a must.
If you're interested in building SharePoint solutions, I highly recommend you find where you fit within these two descriptions. There's plenty of room to grow, but they cover the foundations I -- and many others -- look for when building out SharePoint teams.
Good luck and happy job hunting!

Ever try to install the Silverlight dev tools in an environment with "filtered" or even no access to the internet? If so, you've probably taken a look at the errors in the log and seen the following error:
Error from JobError Callback : hr= 0x80190193 Context=5 Description=HTTP status 403: The client does not have sufficient access rights to the requested server object. . Percentage downloaded = 0
For some tools, like .NET 3.5 and 3.5 SP1, there's an offline installer available on their respective download pages. Unfortunately, the Silverlight dev tools don't have that. A major oversight, if you ask me, but what can you do? Note that this was a problem with both Silverlight 2 and 3. I haven't heard anything about a fix for the future, but I have to think they realize the work-around is ridiculous. Enough of my blabbering, tho...
- Download and save the Silverlight 3 Tools for VS 2008 SP1 to your desktop
- Create a new sltools directory on your desktop
- From a command prompt, run silverlight3_tools.exe /x to extract the files
NOTE: You can also use a tool like 7-zip to extract the files
- Specify the directory you created in step 2
- Download and save the Silverlight Developer Runtime to the sltools directory
NOTE: My problem was that the Silverlight developer runtime is at a blocked URL, so you may have to download it outside your environment and bring it in
- Run sltools\SPInstaller.exe to perform the actual installation
- Delete the sltools directory
That's it. Not hard, but annoying.
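For reference, the whole dance boils down to something like this command-prompt transcript (silverlight3_tools.exe and SPInstaller.exe are the names from my setup; yours may differ):

```bat
rem Run from the desktop, with silverlight3_tools.exe already saved there
mkdir sltools
silverlight3_tools.exe /x
rem ...point the extraction dialog at the sltools directory...
rem ...copy the separately-downloaded developer runtime into sltools...
sltools\SPInstaller.exe
rem Clean up once the install completes
rmdir /s /q sltools
```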
Tim Heuer has a similar blog post for Silverlight 2, but I wanted to share this because the developer runtime is in a new location. I'm still trying to figure out what the full URL is, so I'll update this if I get a chance to figure that out.
Update: Thanks to Bob Pomeroy from the Silverlight team for helping me get the direct URL for the Silverlight 3 developer runtime.
When I need to create a geospatial visualization, Virtual Earth is my default answer. While talking to someone who's been in this space for a while, though, he mentioned the MapPoint Web Service. I initially assumed this was a legacy offering that Virtual Earth replaced. Apparently not. Tatham Oddie has a very nice high-level comparison to at least help you determine which makes the most sense in a given situation.
|  | Virtual Earth | MapPoint Web Service |
| --- | --- | --- |
| Map styles | Road, Aerial, Birds eye | Over 30 different styles (optimised schemes for night viewing, etc.), however no aerial imagery |
| Integration style | JS control (best for embedding in web pages) | SOAP web service (usable anywhere) |
| Interface style | Drag and drop positioning, scroll wheel support, interactive pushpins, AJAX based | Roll your own (it returns an image and you have to work out what to do with it) |
| Pushpin support | You create them all yourself on the fly using API calls; any clustering/filtering optimizations have to be done manually | Can upload pushpin sets to their databases and they will handle plotting, clustering, and filtering |
| Routes | Specify a start point and an end point and they'll give you a route in text. End of story. | Specify the waypoints, preferred road styles (back roads, highways, toll roads, non-toll roads) and it will return a machine-readable result set |
| Cost | Free (commercial use has some minor restrictions) | Per transaction |
| SDK documentation and support | Basic MSDN docs, active community (www.viavirtualearth.com) | Plenty of MSDN docs and articles, including VS.NET integrated help and plenty of websites (www.mp2kmag.com) |

Whether you've heard about Visual Studio 2010 and .NET 4.0 or not, you should really be watching the 10-4 show on Channel 9. As you have probably guessed, the show talks about what to expect in Visual Studio 2010 (version 10.0) and .NET 4.0. The episodes I've seen cover things like ASP.NET, AJAX, parallelization, and overall enhancements to the VS IDE. Admittedly, I'm behind a few episodes, but that's just par for the course. While each of these has been valuable on its own, I have to specifically call out episode 5, Code Focused in Visual Studio 2010. This episode talks about three things: code navigation, test-driven development (TDD), and extending the VS editor.
These first two areas, code navigation and TDD enhancements, are taking a page from the Resharper bible. If you haven't used Resharper yet, you're seriously missing out. Resharper is the one VS add-in I can't live without -- GhostDoc isn't too far behind, tho. The first thing we're getting is the ability to highlight all references to a variable. This doesn't sound all that exciting, but it's really nice to see without having to look, if that makes sense. To top it off, you can bounce between these references with simple keyboard shortcuts.
Bouncing between variable instances is neat, but let's take it up a notch. If you're digging into new code, figuring everything out can be a true feat. To help us move down this path, VS10 is giving us the ability to view the hierarchy of calls related to a specific method/property. The call hierarchy tells you everything that calls your code block and what your code block calls. This is going to make understanding code a lot easier. We're still short of my desired end-goal of having an automatic sequence diagram generated, but at least we're making steps in that direction.
From a productivity perspective, one thing I love about Resharper is that, if I need to open a file, I don't need to know where it is, I simply need to know its name. VS10 is bringing this to everyone. A simple shortcut, like Ctrl+T, and a dialog pops up, waiting for you to type in the file name. You can type a partial name, mycla to get MyClass.cs, or use the Pascal-casing and type MC to get MyClass.cs or MyComponent.cs. Pay attention to how much time you spend in the solution explorer. Imagine cutting that in half, if not more.
The TDD-based enhancement really isn't about TDD, but it does support TDD very nicely. Basically, the idea is, when you're writing code, you want to dig in to the real logic, not go around creating domain objects and data access layers. To support this, you just start typing. If you need a customer class, you just reference it in code. VS will tell you it doesn't know about that class, but this is where the feature comes into play: it'll give you the option to generate it. The same thing happens when you add properties and methods. VS will generate the stubs for you. This lets you focus on one method at a time, without having to divert focus to figure out how third party code needs to work. This is all about decreasing the noise, in my opinion, which is very hard to do sometimes.
The last thing the episode covers is something most people will probably underappreciate: the new WPF-based editor. Despite what people think, this isn't about flashy graphics. Nobody wants text to fly across the screen as we type it. There are two concepts here: (1) simple animation can go a long way to enhance user experience; and, (2) WinForms is now a legacy technology and WPF provides so many enhancements that it just makes sense to bring this to developers, making it easier to build and extend on the #1 development environment in the world. Every time I think about this, I fall back on Resharper. Now that it's so easy to do amazing things with the editor, what is the Resharper team going to be able to give us? What is the community going to be able to give us? I can't wait to find out.

I'm one of the many .NET developers out there that neglects the enhancements in the framework. Not that I mean to, I just keep a running tally of things I need to catch up on, but rarely make the time to actually do any of them. In an effort to shame myself into taking care of a few of these things, I decided to dig into something I haven't spent any time trying to understand: the yield keyword, introduced in C# 2.0. I have to say, I was surprised at how simple it was... well, almost.
To attempt the obligatory textual description: yield, in conjunction with a return or break statement, tells the compiler that the code block should be treated as an iterator. This means the method must be declared to return System.Collections.IEnumerable (or its generic counterpart); but that will be almost completely hidden from you. All you need to do is "yield" each value within a loop. The compiler will wrap your code block and return each value as the enumerator is traversed.
There. Plain as day, right? Doubt it.
While reading about the feature, I was reminded about how crappy some help can be. I just wanted a code snippet to show me what I might do without the yield keyword and then what I'd do with it. Here's what you are probably writing today...
public List<User> GetUsers(IDataReader reader)
{
    List<User> users = new List<User>();

    while (reader.Read())
    {
        User user = new User();
        users.Add(user);
    }

    return users;
}
This is pretty basic stuff. Now, let's look at how you'd do it with the yield keyword...
public IEnumerable<User> GetUsers(IDataReader reader)
{
    while (reader.Read())
    {
        User user = new User();
        yield return user;
    }
}
If you didn't catch it, we were able to get rid of the code that uses the List<User> instance. Sure, only 3 lines, but less code is typically better -- assuming we're not sacrificing readability. Those who're paying a little more attention probably noticed the fourth line that changed (well, technically, it was the first): the return type. Since yield only knows about IEnumerable (and IEnumerable<T>, by proxy), we have to change the return type to match that. I have to admit, I didn't like this. Using IEnumerable basically means I'm stuck with foreach blocks, which I hate using. This led me to investigating performance.
If you really want to know about the performance of for vs. foreach, check out Joe Duffy's blog. Joe works on the PLINQ team and has a very nice post about perf considerations. From the limited tests I ran, I started to see horrid performance when using yield. Then, I realized I probably needed to bump up my iterations to make the test a bit more meaningful. Once I got into the 10,000-50,000 iteration range, I started seeing yield come out on top -- or, at least making it a better race. This goes along with what Joe talks about: you pay the cost of having the enumerator, which costs a lot, but will make up for it over the long haul, assuming you have a lot of iterations.
This isn't the whole story, tho. The real benefit of yield is deferred execution: iterators are lazy. What actually happens is, when you call a method that uses yield, none of your method's code runs yet; the compiler hands back an object that maintains a reference to that method. Then, your code gets an enumerator for it, typically via a foreach block. All this happens without actually touching your method. Only when the foreach block asks for the next instance does .NET actually dig into your method to produce it. The benefit of this is that you only process what you need to process. If you only need to loop thru 10 of the 1000 records, you only process 10, whereas all 1000 would be loaded into memory with the typical approach.
It's all a bit fuzzy until you play with it. I'd recommend creating a simple test to walk thru it yourself, if you really want to get a feel for it. It's as simple as debugging a test. As a matter of fact, here's a simple MSTest project that walks thru it. Hopefully, this helps you understand what's going on.
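If you'd rather see the deferred execution than read about it, here's a minimal, runnable sketch (the Numbers method and the counter are purely illustrative). The counter proves that only the values actually consumed are ever produced.

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static int produced;

    // Each value is produced only when the caller asks for it.
    static IEnumerable<int> Numbers(int count)
    {
        for (int i = 0; i < count; i++)
        {
            produced++;
            yield return i;
        }
    }

    static void Main()
    {
        int consumed = 0;
        foreach (int n in Numbers(1000))
        {
            if (++consumed == 10) break; // stop after 10 items
        }

        // Only 10 of the 1000 values were ever produced.
        Console.WriteLine(produced); // prints 10
    }
}
```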

WPF and Silverlight have a daunting learning curve. There's no doubt about it. All we can do is take one bite at a time and, eventually, we'll finish the elephant. I've talked about my approach to today's WPF/Silverlight tooling and a good intro to routed events and commands, but there's so much more. I haven't talked about dependency properties, but that's probably the second concept you'll have to grasp (before events/commands) as you ramp up on the wide world of XAML. I'm going to skip over it, for now, because I've found something else worth noting: how the value of a dependency property is determined. Obviously, if you don't understand the concepts behind dependency properties, you'll need to brush up, first.
Traditionally, determining the value of a property is simple: you get its value (referred to as "local value" in WPF/Silverlight). That's it. A call to person.FirstName would return the value stored in the private person._firstName field. If it hasn't been set, then the default value of that type would be used. Simple. If we need to inject any custom logic here, we typically start by doing so in the accessor. For instance, if we want to ensure the value cannot be empty, we add a check to the setter. Things can (and will) get more complex, tho. For instance, we might have a need to allow others to add in their own validation. This would traditionally be handled by an event handler with a Person.FirstNameChanged event. For better or worse, this is all custom and has a lot of room for "creativity." WPF seeks to standardize this and adds a bit of a framework around dependency properties to do so. Determining the value of a dependency property is accomplished in a five-step process.
- Get
- Evaluate
- Animate
- Coerce
- Validate
For the most part, these are all pretty simple to understand. The first step is arguably the most complex, in my mind, because getting the base value isn't as simple as the aforementioned person.FirstName example.
1. Get the Base Value
Obviously, you need to know what value you're working with before you can proceed, but with features like templating and property inheritance, what the heck is the value!? In school, you had PEMDAS; in WPF you have... well, something a bit more detailed.
- Local value
- Style triggers
- Template triggers
- Style setters
- Theme style triggers
- Theme style setters
- Property value inheritance
- Default value
I'm not going to dig into each of these. I simply want to mention a few important aspects to keep in mind. First, "local value" refers to any call to DependencyObject.SetValue() (i.e. Height="123" or Canvas.Left="123" in XAML or code). The only other thing to concern yourself with, if you're a beginner to dependency properties, is the default value. Default values are not necessarily the same as that of the underlying type. For instance, FrameworkElement.Height has a default value of "NaN" (not a number), despite the fact that its type, double, has a default of 0. Default values for dependency properties are set when the dependency property is registered.
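To see where that registration-time default comes from, here's a hedged sketch of registering a dependency property with a non-obvious default value. MyControl and Spacing are hypothetical names, not real framework members.

```csharp
using System.Windows;

public class MyControl : FrameworkElement
{
    // The default value lives in the PropertyMetadata passed at registration;
    // here it's NaN, even though double itself defaults to 0.
    public static readonly DependencyProperty SpacingProperty =
        DependencyProperty.Register(
            "Spacing",
            typeof(double),
            typeof(MyControl),
            new PropertyMetadata(double.NaN));

    // Conventional CLR wrapper around the dependency property.
    public double Spacing
    {
        get { return (double)GetValue(SpacingProperty); }
        set { SetValue(SpacingProperty, value); }
    }
}
```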
2. Evaluate
If the value from step one derives from System.Windows.Expression, such as data bindings, WPF converts that to a real value. Pretty self-explanatory.
3. Animate
If the dependency property is currently being animated, the value we've retrieved/evaluated is pretty much ignored in favor of the value set by the animation.
4. Coerce
Next is the injection of custom code via the CoerceValueCallback delegate, if one is registered. Being custom code, you're really left to your imagination on what you can and should do here, but one common scenario is to ensure the value is within expected bounds.
5. Validate
Lastly, we inject one last bit of custom code via the ValidateValueCallback delegate, if one is registered. Validation returns a simple true or false, so there's not much you can do if you made it this far with a bad value. If validation fails, an exception is thrown. For this reason, be sure you take advantage of both coercion and validation, if you have a specific domain you're working in.
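Putting steps 4 and 5 together, here's a hedged sketch of wiring up both callbacks at registration time. The Player class and its Volume property are invented for illustration; the Register and PropertyMetadata overloads are the standard WPF ones.

```csharp
using System;
using System.Windows;

public class Player : DependencyObject
{
    public static readonly DependencyProperty VolumeProperty =
        DependencyProperty.Register(
            "Volume",
            typeof(double),
            typeof(Player),
            new PropertyMetadata(0.5, null, CoerceVolume),
            ValidateVolume);

    // Step 4 (coerce): pull out-of-range values back into expected bounds.
    private static object CoerceVolume(DependencyObject d, object baseValue)
    {
        double value = (double)baseValue;
        return Math.Max(0.0, Math.Min(1.0, value));
    }

    // Step 5 (validate): reject values that make no sense at all;
    // returning false here causes WPF to throw.
    private static bool ValidateVolume(object value)
    {
        return !double.IsNaN((double)value);
    }

    public double Volume
    {
        get { return (double)GetValue(VolumeProperty); }
        set { SetValue(VolumeProperty, value); }
    }
}
```

Note the division of labor: coercion quietly repairs values it can, while validation is the last line of defense and fails loudly.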

JetBrains started work on ReSharper 4.5 recently; but, more importantly, they just made nightly builds available online. I've said it before and I'll say it again: this is the best Visual Studio add-in I've seen... and I'm not the only one who thinks so. I'm a huge productivity geek and ReSharper helps feed my addiction to speed... as in quick. JetBrains won't give you drugs. Although, ReSharper may seem like it, when you work on a machine without it. Development without ReSharper is somewhat analogous to drug withdrawals... cold sweats, lots of twitching, and wondering if you'll be able to finish in time. Okay, maybe not, but I do dread life without it.

Every year, there's one underlying theme that seems to be pushed in the technology arena more than anything. This year, I feel like it's the year of the cloud. The last time I did this was five years ago, so I'll have to back-fill a few years, but here are the themes I've noticed over the past 11 years.
- 2008: Year of the Cloud
- 2007: Year of User Experience
- 2006: Year of AJAX/Web 2.0
- 2005: Year of SaaS
- 2004: Year of Offshore Outsourcing
- 2003: Year of the Architect
- 2002: Year of Web Services
- 2001: Year of XML/.NET
- 2000: Year of Enterprise Java
- 1999: Year of Linux
- 1998: Year of the Web
We've been approaching "the year of the cloud" for a while, now. You can actually look back to 1998, when the web started to really catch on. A few years later, Java started to build momentum and then .NET hit the scenes, which is when XML as a standard communication language started to catch on. Also tied to the .NET release was a huge push for web services. As these were more and more successful, service-oriented architecture (SOA) started to boom. In my mind, that was a big boon to the outsourcing trends, which have seemingly quieted down a bit, but not completely. SOA also led to the software as a service (SaaS) trend, which triggered Microsoft's software plus services (S+S) push, but that was more of a side story. With everything moving to the web, backed by [typically open] services, asynchronous JavaScript and XML (AJAX) was the next big push. This was tied to the "Web 2.0" moniker, which I'd argue wasn't quite what Tim Berners-Lee intended. Either way, this led to the big push for better user experiences, which many people confuse with user interface design. The Web 2.0 push also kept the industry on its web focus, which is where we are left today.
It's easy to look back and see how we got here. Trends show that architectural changes typically take two or three years to gain momentum in the community, so we'll probably have a couple of years before the next major architecture peaks. The trend towards distributed computing has grown more and more, but I have a feeling things are going to start coming back a little. We've been pushing out to the web for a lot of reasons; one of which is the rise of the Mac. What we've been losing out on, however, is the power of the desktop. I see the S+S push continuing, but more as an underlying theme than a strong focus. Services will continue to be the foundation, maintaining the importance of cloud computing, but the desktop will be where the processing occurs. I see Silverlight proving a huge success, which will eventually bring .NET to the Mac. This will probably bring Novell and Microsoft a little closer together, with respect to Microsoft's relationship with Mono, but this may simply be a change in focus for Mono. Oh, and when I say, "bring .NET to the Mac," I'm not talking about the scaled-down version in Silverlight. I'm talking about the real deal. I see WPF and Silverlight merging along with the smart client architecture built into .NET today. This will take more than a few years, but it seems to be inevitable. Most likely, by the time all this happens, multi-core will be a way of life, as opposed to the we-should-be-thinking-about-threading thoughts most developers have today. Armed with a strong multi-threaded foundation, which is easy to use, the combined WPF/Silverlight presentation tier will quickly overtake Flash and AIR. By this time, we should also start to see more integration into our everyday lives...
Okay, I'm probably getting a little out of hand here. If I go much further, we're going to be on the USS Enterprise, so I'll stop while I'm ahead. I'll just leave it at, it'll be interesting to see what's next. My money's on the power of the desktop, which we've lost over the past 10 years.