
In case you haven't heard, Google just debuted Google Chrome Frame, a plugin that lets developers run the Google Chrome rendering engine inside IE, thereby trumping IE.

Reading blogs like this one, I distinctly get the feeling that most of the tech journalists out there don't understand what this is really all about. This is not about Google being annoyed with old versions of IE; it's about HTML5 vs. Silverlight vs. Flash. Right now HTML5 has 3 big disadvantages compared to its plugin counterparts…

  1. Different browsers will implement HTML5 differently, even though it is standards based.
  2. Most browsers don't support HTML5, and getting a user to switch browsers is a lot harder than getting them to install a plugin.
  3. Standards-based innovation is slow. HTML5 in its entirety should be ready for browsers to implement in 2012, just in time to be as far behind Flash and Silverlight as HTML4 is today.

In the end, there are only 2 ways around all of these disadvantages:

  1. Get everyone to use your browser, or
  2. Run everything through a plugin.

Google Chrome (the browser) and Google Chrome Frame together make it possible for developers to write and test code in an IE-less world, thereby getting more developers to care about Chrome, fewer to care about IE, and more to choose HTML5 as a framework to develop apps on.

Right now the plugin is an ActiveX plugin and therefore only works in IE, but give Google some time and I'm sure they'll have a Netscape-style plugin too that will work in Safari and Firefox. It's only a matter of time before Google is in nearly complete control over how HTML5 will be rendered (if the developer chooses so) and thereby won't need to wait on a standards committee to approve further innovation to HTML.

Of course, the picture is even bigger than that… whoever owns the developers will own the dominant OS. In my opinion, this is all a tactic in the Android/Chrome OS vs. iApple vs. Windows war and the reason Google built Chrome in the first place.

This week Microsoft finally revealed its pricing structure for Windows Azure hosting services. Using Azure to host the simplest website in the world costs a minimum of $0.12 / hour. Work out the math: 0.12 * 24 * 30 = $86.40 / month.

While this might sound reasonable to a large organization with tons of traffic or anyone currently using Amazon EC2 or Rackspace's Mosso, this is way out of reach for the majority of developers and organizations who are just trying to create a useful web service or website that could scale on the off chance their idea takes off or gets mentioned by the press.

Based on this pricing, it's obvious that Microsoft is trying to compete with Amazon and targeting the same market. Nevertheless, I personally had high hopes that Microsoft was actually trying to compete with Google App Engine by offering the first and only affordable and scalable Windows hosting option. Which raises the point (in case anyone from Microsoft is listening): if Microsoft wants .NET to compete long-term as a server-side platform (which is essential for Windows to thrive as a server-side OS), someone is going to have to solve this problem soon or it will find itself playing catch-up.

I love Windows Azure and I believe it is a great, simple and affordable option for the big boys. But as Windows Azure leaves beta and the world says hello, I say goodbye before I have to start coughing up ~$100/mo for my personal websites. Back to shared hosting at GoDaddy ($4/mo for Windows + SQL).

Silverlight 3 introduces the WriteableBitmap class and with it, the ability to crop an image programmatically on the client!

All you need to do is create a new instance of the WriteableBitmap class as your destination (supplying the dimensions in the constructor). Then create or acquire another instance of the WriteableBitmap class fully loaded with an image. Among other ways, you can do this by creating a BitmapSource object from a stream via .SetSource and passing that BitmapSource instance into the constructor of a new WriteableBitmap.
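For example, here's a minimal sketch of that setup (imageStream, cropWidth, and cropHeight are illustrative names, not from my actual code):

BitmapImage bitmap = new BitmapImage();
bitmap.SetSource(imageStream); // imageStream: any System.IO.Stream containing the image
WriteableBitmap source = new WriteableBitmap(bitmap); // source, fully loaded with the image
WriteableBitmap destination = new WriteableBitmap(cropWidth, cropHeight); // empty destination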

Once you have your source and destination WriteableBitmap instances, just retrieve one pixel at a time from the source instance and set that pixel on the destination instance. The pixels are stored in a property on the object called Pixels, which is a one-dimensional array. Getting the index of a given pixel in the array is simple: index = x + y * width.
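For illustration, a naive per-pixel crop using that formula might look like this (a sketch; the variable names are mine):

for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // index = x + y * width, with the source offset by the crop origin
        destination.Pixels[x + y * width] =
            source.Pixels[(xOffset + x) + (yOffset + y) * sourceWidth];
    }
}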

In my first pass, I looped through pixel by pixel just like that. It was fast, but not as fast as using Array.Copy, which turned out to be almost twice as fast…

private static WriteableBitmap CropImage(WriteableBitmap source, int xOffset, int yOffset, int width, int height)
{
    // Cache PixelWidth once; the property getter is expensive (see note below)
    int sourceWidth = source.PixelWidth;
    WriteableBitmap result = new WriteableBitmap(width, height);

    for (int y = 0; y < height; y++)
    {
        // First pixel of this row in the source (offset by the crop origin) and in the destination
        int sourceIndex = xOffset + (yOffset + y) * sourceWidth;
        int destIndex = y * width;

        // Copy an entire row of pixels in one call
        Array.Copy(source.Pixels, sourceIndex, result.Pixels, destIndex, width);
    }

    return result;
}

The result (a WriteableBitmap) can then simply be used as the source of an Image control to display your cropped image.
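For example (the control name and crop values here are illustrative):

WriteableBitmap cropped = CropImage(source, 10, 10, 200, 150);
MyImage.Source = cropped; // MyImage is an Image control defined in XAML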

Note: WriteableBitmap.PixelWidth is expensive. Be sure to call it only once if possible.

Possible Improvement: Had the source width and destination width been the same I could have done it in a single call to Array.Copy and presumably made it even faster.

It's high time that web developers were able to do complicated tasks on the client instead of being forced to use the server just because the client-side platform doesn't support them. We're still not all the way there yet, but Silverlight gets us a lot closer than anything before it.

How many people have Silverlight installed? According to Rich Internet Application Statistics, Silverlight is installed on 30.27% of the machines out there. However, I’ve been tracking stats on my own site: MyPadlock.com (a free password manager for Windows) and have seen a much different number. Here are my results from the previous month:

[Image: Silverlight install-rate statistics for MyPadlock.com visitors]

Find out more about tracking Silverlight usage with Google Analytics.

Caveats: There are a number of differences that could account for why I'm seeing a larger percentage than the Rich Internet Application Statistics site. First of all, I doubt I have the same volume of traffic being tracked. However, I do have a fairly healthy volume, most of my visitors are not repeat visitors, and the numbers have shown relative consistency from month to month, so think what you like, but I'm going to rule this out as a relevant factor.

Also note that my site does not use Silverlight in any way, nor does it receive any real traffic from a source that uses Silverlight. Oh, and my own visits are not counted either.

Most likely, the difference is due to differences in audiences. Certainly, a password manager is the kind of software that power users are more likely to use than novice users. Also, power users are probably more apt to explore new corners of the web and are therefore more likely to have encountered Silverlight at some point. And lastly, my site gives away a Windows-only app, and perhaps Windows users are more likely to have Silverlight installed (might be a reach, but worth speculating).

Nevertheless, whatever the reason is, I still believe the statistic is valuable and, at the very least, tells me what to loosely expect for anyone thinking of porting a Windows app to Silverlight.

What kind of numbers are you seeing? I’d love to hear from anyone with access to similar stats for their own site(s).

I just finished one of the coolest and most exciting apps I’ve ever written and the client-side was done in 100% native Silverlight 2. Fortunately, I was able to get it done just in time to debut for the NewCloudApp Windows Azure contest.

It’s an online jigsaw puzzle and it was as fun to write as it is to play. You can choose from over a hundred images or use your own photo, select practically any number of pieces (I recommend 12 to start), and even send puzzles to your friends.

Click the link below to check it out and tell your friends if you like it!

PuzzleTouch Online Jigsaw Puzzles

I can’t wait to write more about why I chose Silverlight for this project (really, why Silverlight was the only platform up for the job) and all the new things I learned along the way. But for now, go forth and play puzzles.

Here's a tip to get better performance when sending XML to a WCF web service from Silverlight or WPF…

Never send or return XML as a string. Anytime you pass a string data type as a parameter to your service or send back a string return value, the string will be XML-encoded. Imagine the following string:

<hello/> (8 bytes)

this is really sent around as:

&lt;hello/&gt; (14 bytes)

Something that should have taken only 8 bytes of your bandwidth took almost double that. Now, an extra 6 bytes isn't bad, but you can see how it wouldn't take long for your XML to inflate the overall size of the request or response, and at some point even cause a noticeable delay in the usability of your app.

Fortunately, the solution is simple! Use a System.Xml.Linq.XElement object instead. By doing so, WCF is smart enough to encode your XML right in the message envelope as pure XML. Furthermore, if you need to work with that XML on the receiving end, it's already in an object type more suitable for most purposes than a String.
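As a rough sketch, the server-side contract just takes and returns XElement instead of string (ITestService is an illustrative name; DoWorkXElement and param1 match the names you'll see in the captures below):

using System.ServiceModel;
using System.Xml.Linq;

[ServiceContract]
public interface ITestService
{
    // WCF serializes the XElement as raw XML in the message envelope
    [OperationContract]
    XElement DoWorkXElement(XElement param1);
}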

To demonstrate and prove what is happening, I wrote a test app that sent a test parameter to a service as a String, an XElement, and a byte array. The test data being sent in all three cases was the XML: <test/>. Then I used the Fiddler2 web debugging proxy to see what was actually sent to the server. Check out below what I saw:

Sending a String:

<s:Body><DoWorkString><param1>&lt;test/&gt;</param1></DoWorkString></s:Body>

Sending a byte array:

<s:Body><DoWorkByteArray><param1>PHRlc3QvPg==</param1></DoWorkByteArray></s:Body>

Sending an XElement:

<s:Body><DoWorkXElement><param1><test /></param1></DoWorkXElement></s:Body>

XElement wins! 🙂 And just to be sure, I also confirmed that responses for the three data types produced identical results.

 

A note about compression

Compression does not completely remove the benefit of sending as XElement. Server compression only works on responses, and even there it doesn't eliminate the benefit of sending XML without encoding. This was surprising to me. I thought that compressing my responses on the server would find common XML-encoding sequences like "&lt;" and "&gt;", turn them into single bytes using a mapping technique, and make an XML-encoded and a non-XML-encoded response virtually identical in size when compressed. To test my assumption, I ran a test where I took a big XML file and added both it and an encoded version of it to a zip file. Here was my result:

[Image: compressed sizes of the encoded vs. unencoded XML file]

Although encoded XML can be compressed at a slightly higher compression ratio, the difference was not as dramatic as I thought, and the final compressed sizes show that although compression on the server helps reduce the size of your response a great deal, XML-encoded strings will still be larger than necessary. Check out my previous blog post to find out more about optimizing responses by turning on server compression.

Keep your Silverlight app running fast by compressing your service responses. Imagine you're downloading 1MB worth of text. Compressed, that same text can usually be reduced to under 200K. This reduction can be significant enough to be noticeable even to clients on good internet connections, and over time it will save you money on bandwidth usage.

Fortunately, there's NO need to find a 3rd-party zip component or to try to do it yourself. IIS has everything built right in; you just need to enable it.

The way it works is: the browser sends an HTTP header of "Accept-Encoding" with gzip and/or deflate as the value with each request to your WCF service. As long as IIS is configured correctly, the server will automatically compress the response from the service, and the client will automatically decompress it before your code enters the picture. Not a single line of code is required on your part to take full advantage of this built-in compression feature, which works across most major browsers.
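For example (illustrative request and response), the relevant headers look roughly like this:

GET /MyService.svc HTTP/1.1
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip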

To set up IIS6 to participate, you need to do the following (sorry, haven't tried this on IIS7):

1) In the IIS console, right click on “Web Sites”, choose properties, select the Services tab and check “Compress application files”

[Screenshot: IIS website properties, "Compress application files" setting]

2) Also in the IIS console, go to the Web Service Extensions folder and click the “Add a new Web service extension” link. In the dialog that appears, enter a name for the extension. I named mine “gzip”. Next, enter the path of the dll capable of zipping the responses (c:\windows\system32\inetsrv\gzip.dll), and check “Set extension status to Allowed”.

[Screenshot: the new Web service extension dialog]

3) Run the following command lines to update metabase.xml:

CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/GZIP/HcScriptFileExtensions "asp" "dll" "exe" "svc"
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/DEFLATE/HcScriptFileExtensions "asp" "dll" "exe" "svc"
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/GZIP/HcDynamicCompressionLevel 9
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/DEFLATE/HcDynamicCompressionLevel 9

4) Restart IIS (you can right click on the computer name in the IIS console, choose All Tasks, and select “Restart IIS”). Not 100% sure this is necessary.

5) Wait. It took my server on Amazon EC2 approximately 3 minutes before the changes from step #2 flowed to the metabase.xml file. You can always go check by looking at the date modified of c:\windows\system32\inetsrv\metabase.xml

In the end, you can test that it's working by going to PipeBoost and typing in the URL of your .svc file. Also, when running your app, you can use a tool like Fiddler2 to show you the data actually coming down to the client, along with an indicator that it is compressed.

That's it! Don't do a thing to your Silverlight app but watch it instantly start downloading data faster!

I finally got my entry to the MIX 10K contest accepted!

The idea was inspired by MyPadlock Password Manager: a simple, secure, and free password manager developed by yours truly. The 10K entry is a fun little Silverlight app that lets you store any kind of text data behind one master password. That data is then encrypted with your password and stored in isolated storage.
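The approach, roughly, is to derive an AES key from the master password and write the encrypted bytes to isolated storage. Here's a minimal sketch of that idea (not the actual contest code; the method, file name, and salt handling are all illustrative):

using System.IO;
using System.IO.IsolatedStorage;
using System.Security.Cryptography;
using System.Text;

static void SaveEncrypted(string text, string password)
{
    byte[] salt = { 1, 2, 3, 4, 5, 6, 7, 8 }; // real code should use a random, stored salt
    Rfc2898DeriveBytes keyGen = new Rfc2898DeriveBytes(password, salt);

    using (AesManaged aes = new AesManaged())
    {
        aes.Key = keyGen.GetBytes(32); // 256-bit key derived from the master password
        aes.IV = keyGen.GetBytes(16);

        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
        using (Stream file = new IsolatedStorageFileStream("vault.dat", FileMode.Create, store))
        using (CryptoStream crypto = new CryptoStream(file, aes.CreateEncryptor(), CryptoStreamMode.Write))
        {
            byte[] data = Encoding.UTF8.GetBytes(text);
            crypto.Write(data, 0, data.Length);
        }
    }
}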

Check it out

Click here to check it out. There's a fun animation when entering your password and when locking it back up again (via the X in the upper right that appears once you're in the program).

Developing this was a good test of borrowing something written in WPF and getting parts of it to run in Silverlight. Implementing the behavior I wanted was super fast, and the hardest part by far was systematically destroying the readability of my code as I whittled it down from ~30K to just under 10K (with fewer than 100 bytes to spare!). I kept thinking: I sure hope I don't have to fix a bug after I strip out all the spaces, rename my variables to single characters, and do all the other ugly tricks I had to do to shave off those unwanted bytes.

Hope you all enjoy this fun little project and be sure to check out MyPadlock Password Manager if you’re not already using something to keep your usernames and passwords safe.

I've tried to avoid commentary blogging, but in this case I can't help but offer a few opinions and predictions on the 3 biggest scalable hosting options today for Windows servers.

Over the last year I've spent a lot of time thinking about scalable hosting and working with both Mosso Cloud Sites and Amazon EC2. I've recently also been able to get down to business with Windows Azure hosting and have developed some opinions about how the three stack up against each other. This is by no means meant to be an exhaustive comparison of the three, but merely my impressions, from a 30,000-foot view, of the choices for scalable Windows hosting today.

Control: Amazon EC2 offers the most control. Aside from the internal guts of EC2 and the ability to deploy multiple instances, Amazon EC2 is for all practical purposes a dedicated server. You control every aspect of it and are responsible for it as you would be for any other dedicated server. The upside is that if you need to install 3rd-party software on your hosting environment, impersonate users, or do any number of other rare and unusual things that you can't do on anything but your own box, you have all the control you could want with EC2. The downside, of course, is that you have a lot of rope to hang yourself with, and Amazon isn't going to come to your rescue when you do.

Ease: Mosso and Azure definitely share the prize here. Who wants to worry about installing security patches, deploying new server instances, user permissions, etc.? As a developer before an IT administrator, I want to spend my time developing, not configuring and maintaining servers. Let the Mosso staff (same company as Rackspace) or the folks at MS (the company that wrote Windows Server) do this work for you. Upload and scale without worrying about much more than your code.

Price: Both Mosso and Amazon EC2 are around $100/mo. Windows Azure pricing has yet to be announced. The big question in my mind is whether Azure will be like Amazon S3, where if you have super low usage you get charged practically nothing (with S3, I once saw a bill for literally 1 cent!), or whether the pricing is going to be like Amazon EC2, where hosting a site that is practically never hit still costs a minimum of almost $100/month.

Conclusion: Why use EC2 unless you need the extra control? (Which, by the way, I did; I settled on EC2 for my latest project.) But as time goes on, as we rely less and less on legacy code, as server component vendors start using best practices to write their components, and as web services become standardized, more control will be necessary less often. Amazon may eventually offer something that competes with Mosso and Azure in terms of ease of use, but for now the two stand alone in this niche, rivaled only by shared hosting (which of course lacks the scalability).

Azure has the potential to dominate the market, but this will all depend on pricing. Azure can go down the road of Mosso and EC2 and compete purely on features, or it can be priced in a way where you pay for only the horsepower that you use, opening the cloud computing floodgates and admitting those with tight budgets. This won't matter to many medium and large companies who need at least one dedicated server, but it matters tremendously to everyone else who needs less than a server to host their applications and sites. If MS wants this audience (presumably the majority of those who need hosting), it must make its price scale like its hardware.

Some examples of who would be left out if MS takes the EC2/Mosso pricing model: Imagine an entrepreneurial developer with a great SaaS idea. Either they keep their costs low until the idea proves itself by hosting at a place like GoDaddy, or they shell out $1200/year to make sure it can scale. Or how about the local restaurant that isn't shooting for an international web presence? GoDaddy can get you hooked up with ASP.NET and SQL for $4/month, last time I looked. Why would you ever pay 25 times that for something like Mosso or EC2? And last, imagine the programmer who writes a simple little web service to perform some small function for an in-house app. Either they piggyback on some other server (probably one being used for mission-critical functions) or they create a special server for themselves. If you're a developer, ask yourself: how many times have you wondered or asked, "which server should I put this on?"

Here's a chance for MS and Azure to really change the world of software and cloud computing. By choosing a pricing model that scales at the low end, they could essentially eliminate cost as a constraint to launching an application in the cloud. Never again would a developer abandon an idea because it costs too much to make sure it could scale. If they go with the Mosso/EC2 pricing model, where you get charged a relatively large amount for having a nearly idle server, then the majority of programmers will be left to suffer traditional, unscalable, shared hosting a little longer, while those of us with bigger budgets will have 3 great choices for scalable Windows hosting.

I finally found some time to try out my preview account on Windows Azure and the new January CTP of the SDK and VS tools, and thought I'd share some of my impressions and some hurdles I ran into while getting up and running.

1) To debug your application locally, you need to be running a local instance of IIS, which, as I discovered when trying to run my project in VS, I had never actually added to Windows. I guess I've been so spoiled with VS.NET's built-in localhost that I didn't have a need for a local instance of IIS until now. I remember the day when this was one of the first things I did after installing Windows. Looks like a return to those days.

2) To run the development fabric (the thing that allows you to simulate and debug Windows Azure on your workstation), you have to run VS.NET as an administrator. So far I've forgotten every time I've gone into my project, and I'm sure it won't be the last. It's kind of a bummer when you launch VS, load your project, hit F5 and… argh! I have to start all over. Yes, I get impatient when it comes to repeating my own mistakes 🙂

[Screenshot: launching Visual Studio with "Run as administrator"]

Note: Turning off UAC does not eliminate this.

3) I ran my app locally and all I got was a blank white page instead of my SL app or an error. Fortunately, I've run into this more than once now on Windows Server, so the problem and solution were still lingering somewhere in the back of my head: the .xap MIME type (application/x-silverlight-app) wasn't added to IIS. Once I realized this, a quick search on Google yielded the solution, and a minute in IIS was all it took to move on to the next problem…

4) Next, I added a reference to my WCF service via the 'discover' feature in service references, and it was added as http://localhost:12404/Service1.svc. However, the Azure development fabric actually runs the app under http://127.0.0.1:81/. It only took a quick glance at my address bar in IE to discover this and realize that my service was probably running on port 81 too. Changing ServiceReferences.ClientConfig to the new service URL was all it took.

5) Last, I received HTTP error 403.3 when trying to hit my local .svc file. This time I was prepared because of the "xap incident" (#3 above). Again, I needed to add a MIME type, this time for .svc files. As with the .xap file extension problem, it only took a few seconds on Google and I was up and running with the fix.

Finally, I was in business running locally and ready to deploy! I wanted to see my app and service running in the cloud… no time for reading documentation, right!? Well, the publish experience for Azure was made for people like me. I right-clicked on my startup project, chose 'Publish' not entirely sure what to expect, and was pleased to find the whole process very intuitive. Up came a web page for uploading your package (.cspkg) and configuration (.cscfg) files, along with a window showing the folder where those two files resided.

Simply upload the two files and start your server instance (staging or production) and away you go. Publishing wasn't quite as easy as publishing to an FTP site, but I had no trouble figuring out what to do, and in no time I had my app running in a staging environment and moments later running from my vanity URL. Very cool! There was a little confusion for a few moments because after the management console reported my instance as "Started" it still took a minute or two before it worked in my browser. In the words of Axl Rose and Yoda, I just needed a little patience.

All in all, I was a little disappointed with the experience in Visual Studio and worry about the first impressions of those not as familiar with VS development. Then again, VS.NET 2008 was out the door long before Azure hit the scene, so I'd expect a little retrofitting to be required to get VS to play nice with Azure and the development fabric. Hopefully in VS2010 it will all be much more integrated, as ASP.NET apps are in VS today.

P.S. You can see the fruits of my labor on my previous post where I created an application to peer into Silverlight’s BrowserInfo and ASP.NET ServerVariables collection.