I'm one of the many .NET developers out there who neglect the enhancements in the framework. Not that I mean to; I just keep a running tally of things I need to catch up on, but rarely make the time to actually do any of them. In an effort to shame myself into taking care of a few of these things, I decided to dig into something I haven't spent any time trying to understand: the yield keyword, introduced in C# 2.0. I have to say, I was surprised at how simple it was... well, almost.
To attempt the obligatory textual description: yield, in conjunction with a return or break statement, tells the compiler that the code block should be treated as an iterator. This means the method must return System.Collections.IEnumerable (or its generic counterpart); but that will be almost completely hidden from you. All you need to do is "yield" each value within a loop. The compiler will wrap your code block in a state machine and return each value as the enumerator is traversed.
There. Plain as day, right? Doubt it.
While reading about the feature, I was reminded about how crappy some help can be. I just wanted a code snippet to show me what I might do without the yield keyword and then what I'd do with it. Here's what you are probably writing today...
public List<User> GetUsers(IDataReader reader)
{
    List<User> users = new List<User>();
    while (reader.Read())
    {
        User user = new User();  // populated from the current record
        users.Add(user);
    }
    return users;
}
This is pretty basic stuff. Now, let's look at how you'd do it with the yield keyword...
public IEnumerable<User> GetUsers(IDataReader reader)
{
    while (reader.Read())
    {
        User user = new User();  // populated from the current record
        yield return user;
    }
}
If you didn't catch it, we were able to get rid of the code that uses the List<User> instance. Sure, it's only 3 lines, but less code is typically better -- assuming we're not sacrificing readability. Those who're paying a little more attention probably noticed the fourth line that changed (well, technically, it was the first): the return type. Since yield only works with IEnumerable (and IEnumerable<T>, by proxy), we have to change the return type to match. I have to admit, I didn't like this. Using IEnumerable basically means I'm stuck with foreach blocks, which I hate using. This led me to investigate performance.
If you really want to know about the performance of for vs. foreach, check out Joe Duffy's blog. Joe works on the PLINQ team and has a very nice post about perf considerations. From the limited tests I ran, I started to see horrid performance when using yield. Then, I realized I probably needed to bump up my iterations to make it a bit more meaningful. Once I got into 10-50,000 iterations, I started seeing yield come out on top -- or, at least, making it a better race. This goes along with what Joe talks about: you pay the up-front cost of the enumerator, which isn't cheap, but it makes up for it over the long haul, assuming you have a lot of iterations.
This isn't the whole story, tho. The real key is that yield is lazy, not eager. What actually happens is, when you call a method that uses yield, the method body doesn't run at all; the compiler has wrapped it in a state machine that simply holds a reference to your code. Then, your code will get an enumerator for it, typically via a foreach block. All this happens without actually touching your method. Only when the foreach block pulls the next item (the equivalent of users[i] in a for block) does .NET actually dig into your method to produce the next instance. The benefit of this is that you only process what you need to process. If you only need to loop thru 10 of the 1000 records, you only process 10, whereas all 1000 would be loaded into memory with the typical approach.
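To see the laziness for yourself, here's a small console sketch; the Numbers method and the 10-of-1000 cutoff are just illustrative:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // Each value is produced only when the caller asks for it.
    static IEnumerable<int> Numbers()
    {
        for (int i = 1; i <= 1000; i++)
        {
            Console.WriteLine("Producing " + i);
            yield return i;
        }
    }

    static void Main()
    {
        // Nothing runs yet -- calling the method only builds the state machine.
        IEnumerable<int> numbers = Numbers();

        // Only the first 10 of the 1000 values are ever produced.
        int count = 0;
        foreach (int n in numbers)
        {
            if (++count == 10) break;
        }
        Console.WriteLine("Consumed " + count);
    }
}
```

Step thru this in a debugger and you'll see execution bounce between Main and Numbers -- "Producing 10" is the last value ever generated; the other 990 never happen.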
It's all a bit fuzzy until you play with it. I'd recommend creating a simple test to walk thru it yourself, if you really want to get a feel for it. It's as simple as debugging a test. As a matter of fact, here's a simple MSTest project that walks thru it. Hopefully, this helps you understand what's going on.
There are a number of commands in PowerShell that aren't as "quiet" as you may want them to be. Sometimes, there are parameters to suppress output, but not always. Fortunately, we have Out-Null. While seemingly simple, this is a priceless cmdlet. I use it when writing scripts to keep the output clean.
Perhaps the simplest explanation of this cmdlet is to show a very common function, md, which creates a new directory. For the uninitiated, this function is available to support backwards-compatibility with DOS. You may have caught that I referred to this as a function and not an alias to a cmdlet. Based on that, if you want to see what the function does, simply use Get-Content.
PS C:\Flanakin> Get-Content function:md
param([string]$paths); New-Item -type directory -path $paths
As you can see, md simply makes a call to New-Item and tells it to create a directory with the specified path. Pretty simple. Here's what the output looks like.
PS C:\Flanakin> md noisy
Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         1/27/2009   2:51 PM            noisy
I don't know about you, but that's a lot more than I really care to know. Oh, and note the 6 extra lines. Bleh! Luckily, Out-Null will save us.
PS C:\Flanakin> md quiet | Out-Null
That's it! You gotta love something so simple.
I was thinking about the Dec 31, 2008 debacle Zune went thru, where the devices didn't work for a 24 hr period. If you didn't hear about it, the problem was due to a device driver, which wasn't controlled by Microsoft. This is exactly the problem Microsoft has to deal with: crappy hardware vendors. I remember the sad, sad day I found out the Zune was built using Toshiba hardware. I have hoped so much that this would change, but it hasn't... yet. I say that, not knowing of things to come, but hoping that Microsoft will realize the error of its ways.

Microsoft should take tighter control over hardware by using quality hardware vendors. Hell, the Zune issue is nothing compared to the red ring of death issues the Xbox faces. I don't know anything about the Xbox hardware, tho, so I can't say much about that. Heck, Microsoft can't either, considering they haven't fixed the problem yet, as far as I know.

I'd like to see Microsoft either form a division focused on delivering great hardware -- like phones, Zunes, Xboxes, desktops, and laptops -- or pony up and buy a company. There has been a lot of speculation to that effect with the purchase of Danger in early 2008, but Microsoft has claimed the "Zune Phone" won't happen. That doesn't stop the rumors from piling up, tho.

All I can say is that, if my vote was worth anything, I'd be voting for Lenovo. I've purchased 2 and am about to get another. I've even thought about replacing my desktop with a Lenovo. What's even better, tho, is the idea of having a Lenovo phone. As much as I like my HTC Touch Pro (AT&T Fuze) -- minus the crap AT&T does to it, that is -- my love affair with Lenovo laptops really has me lusting after their new phone. If only it'd make it to the US... along with the HTC Touch HD, which I still want.

All this really boils down to one question, in my mind: will Microsoft reconsider a higher level of control after dealing with one problem after another from hardware vendors?
I kind of doubt it, but I'll keep hope alive.
WPF and Silverlight have a daunting learning curve. There's no doubt about it. All we can do is take one bite at a time and, eventually, we'll finish the elephant. I've talked about my approach to today's WPF/Silverlight tooling and a good intro to routed events and commands, but there's so much more. I haven't talked about dependency properties, but that's probably the second concept you'll have to grasp (before events/commands) as you ramp up on the wide world of XAML. I'm going to skip over it, for now, because I've found something else worth noting: how the value of a dependency property is determined. Obviously, if you don't understand the concepts behind dependency properties, you'll need to brush up, first.
Traditionally, determining the value of a property is simple: you get its value (referred to as the "local value" in WPF/Silverlight). That's it. A call to person.FirstName would return the value stored in the private person._firstName field. If it hasn't been set, then the default value of that type would be used. Simple. If we need to inject any custom logic here, we typically start by doing so in the accessor. For instance, if we want to ensure the value cannot be empty, we add a check to the setter. Things can (and will) get more complex, tho. For instance, we might have a need to allow others to add in their own validation. This would traditionally be handled by an event handler with a Person.FirstNameChanged event. For better or worse, this is all custom and has a lot of room for "creativity." WPF seeks to standardize this and adds a bit of a framework around dependency properties to do so. Determining the value of a dependency property is accomplished in a five-step process.
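Here's a minimal sketch of that traditional, hand-rolled pattern, using the Person/FirstName names from above (the FirstNameChanged signature is my own assumption):

```csharp
using System;

public class Person
{
    private string _firstName;

    // Hand-rolled extension point: consumers subscribe to react to changes.
    public event EventHandler FirstNameChanged;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            // Custom validation lives directly in the setter.
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("FirstName cannot be empty");
            _firstName = value;
            EventHandler handler = FirstNameChanged;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }

    static void Main()
    {
        Person person = new Person();
        person.FirstNameChanged += (s, e) => Console.WriteLine("FirstName changed");
        person.FirstName = "Michael";
    }
}
```

It works, but every class rolls its own flavor of validation and change notification -- exactly the "creativity" WPF's dependency property framework is trying to standardize.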
For the most part, these are all pretty simple to understand. The first step is arguably the most complex, in my mind, because getting the base value isn't as simple as the aforementioned person.FirstName example.
1. Get the Base Value
Obviously, you need to know what value you're working with before you can proceed, but with features like templating and property inheritance, what the heck is the value!? In school, you had PEMDAS; in WPF you have... well, something a bit more detailed.
- Local value
- Style triggers
- Template triggers
- Style setters
- Theme style triggers
- Theme style setters
- Property value inheritance
- Default value
I'm not going to dig into each of these. I simply want to mention a few important aspects to keep in mind. First, "local value" refers to any call to DependencyObject.SetValue() (e.g. Height="123" or Canvas.Left="123" in XAML or code). The only other thing to concern yourself with, if you're a beginner to dependency properties, is the default value. Default values are not necessarily the same as that of the underlying type. For instance, FrameworkElement.Height has a default value of "NaN" (not a number), despite the fact that its type, double, has a default of 0. Default values for dependency properties are set when the dependency property is registered.
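To illustrate that last point, here's a sketch of registering a dependency property with a default that differs from the CLR type's default; the Badge class and Scale property are made up for illustration:

```csharp
using System.Windows;

public class Badge : FrameworkElement
{
    // The default value (NaN here) is supplied at registration time,
    // not taken from the CLR default of double (which would be 0).
    public static readonly DependencyProperty ScaleProperty =
        DependencyProperty.Register(
            "Scale",
            typeof(double),
            typeof(Badge),
            new PropertyMetadata(double.NaN));

    // Conventional CLR wrapper around GetValue/SetValue.
    public double Scale
    {
        get { return (double)GetValue(ScaleProperty); }
        set { SetValue(ScaleProperty, value); }
    }
}
```

A brand-new Badge reports Scale as NaN, even though nothing ever "set" it -- the default value sits at the very bottom of the precedence list above.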
2. Evaluate the Expression

If the value from step one derives from System.Windows.Expression, such as data bindings, WPF converts that to a real value. Pretty self-explanatory.
3. Apply Animations

If the dependency property is currently being animated, the value we've retrieved/evaluated is pretty much ignored in favor of the value set by the animation.
4. Coerce the Value

Next is the injection of custom code via the CoerceValueCallback delegate, if one is registered. Being custom code, you're really left to your imagination on what you can and should do here, but one common scenario is to ensure the value is within expected bounds.
5. Validate the Value

Lastly, we inject one last bit of custom code via the ValidateValueCallback delegate, if one is registered. Validation returns a simple true or false, so there's not much you can do if you made it this far with a bad value. If validation fails, an exception is thrown. For this reason, be sure you take advantage of both coercion and validation, if you have a specific domain you're working in.
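To tie coercion and validation together, here's a sketch using a hypothetical Gauge class whose Level property is clamped to 0-100 by coercion and rejected outright if it's NaN:

```csharp
using System.Windows;

public class Gauge : FrameworkElement
{
    public static readonly DependencyProperty LevelProperty =
        DependencyProperty.Register(
            "Level",
            typeof(double),
            typeof(Gauge),
            new PropertyMetadata(0.0, null, CoerceLevel),
            ValidateLevel);

    // Coercion can repair an out-of-bounds value instead of rejecting it.
    private static object CoerceLevel(DependencyObject d, object value)
    {
        double level = (double)value;
        if (level < 0.0) return 0.0;
        if (level > 100.0) return 100.0;
        return level;
    }

    // Validation can only accept or reject; a rejection throws.
    private static bool ValidateLevel(object value)
    {
        return !double.IsNaN((double)value);
    }

    public double Level
    {
        get { return (double)GetValue(LevelProperty); }
        set { SetValue(LevelProperty, value); }
    }
}
```

Setting Level to 150 quietly becomes 100, while setting it to double.NaN throws -- which is exactly why you want coercion catching everything it reasonably can before validation gets its veto.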