JSON and RESTful Web services using ASP.NET MVC WebApi – It's what every Web Programmer should know!

In my last post I described how Microsoft and its technologies have made programming for the web simple and easy. I also suggested how these same technologies have created misinformed web developers (like me) as well.

All the abstraction over the RESTful nature of HTTP has served at least one purpose: it has managed to “abstract” a large number of developers away from the inner workings of HTTP and the web. Making an Intranet business application using ASP.NET does not necessarily require you to have any knowledge of JSON or, for that matter, the RESTful nature of the web. Using Web Forms and WCF you could easily create a Service Oriented Application without much knowledge of how the web actually works. But if you dive in just a bit deeper, you'll get to know that WCF has its own quirks.

Ok, enough of “MS bashing” and “developer belittling”. Learning the ropes of JSON is not that tough, I would say.

Ironically, with the recent emphasis MS has laid on introducing open-source packages into our VS projects, it has become easier to learn things like JSON and programming for the web as it should be (RESTful), using things like ASP.NET WebApi.

Let's get over that block in our head and quickly create a running JSON example (using Visual Studio, of course 😉).

Before Starting, it would be great to have some background.

So, What is JSON?

It's short for JavaScript Object Notation. In theory, any CLR object can be described as an equivalent object in JavaScript. As you know, JavaScript is not a strictly object-oriented language, BUT it has some features of an OOP language.
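To make that concrete, here is how a Person object (the kind of CLR object we build later in this post) looks when it round-trips through JSON in JavaScript:

```javascript
// A CLR Person (Name, Age, Sex) has a natural JavaScript equivalent.
var person = { Name: "Rahul", Age: 28, Sex: "M" };

// Serializing gives the JSON text that travels over the wire...
var json = JSON.stringify(person);
console.log(json); // {"Name":"Rahul","Age":28,"Sex":"M"}

// ...and parsing the text gives back a plain JavaScript object.
var roundTripped = JSON.parse(json);
console.log(roundTripped.Name); // Rahul
```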

Why is JSON popular?

So, folks using WCF will know that the data transferred and consumed in a web application is usually in the form of XML; any CLR object can easily be represented as an equivalent XML document. JSON is quite similar, but uses a lot fewer characters to represent the same data. Hence it's lighter and faster across the wire. Also, it's very easy to consume JSON objects in HTML/JavaScript, as they are actually JavaScript objects.
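A quick, hand-written comparison of the same record in both formats shows the character count difference:

```javascript
// The same Person record, hand-written in both formats.
var xmlText = '<Person><Name>Rahul</Name><Age>28</Age><Sex>M</Sex></Person>';
var jsonText = '{"Name":"Rahul","Age":28,"Sex":"M"}';

console.log(xmlText.length);  // 60 characters
console.log(jsonText.length); // 35 characters
```

Nearly half the characters for the same data, and the gap grows with bigger payloads, since every XML element pays for both an opening and a closing tag.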

What do we mean by RESTful nature of Web?

REST stands for Representational State Transfer. The idea is that any request over HTTP should be one of the following types:

  • GET – Get a resource from a server
  • POST – Post/Insert/Input some information on a server
  • PUT – Update some information on a server
  • DELETE – Delete some information on a server

These are the standard “verbs” (along with a few others) supported by HTTP protocol.
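The mapping from the usual CRUD operations onto these verbs can be sketched in JavaScript. The /api/PersonApi URL below matches the WebApi endpoint we build further down; the helper function itself is just an illustration, not part of any library:

```javascript
// A tiny helper that maps CRUD operations onto the standard HTTP verbs.
function buildRequest(operation, url, body) {
    var verbs = { read: 'GET', create: 'POST', update: 'PUT', remove: 'DELETE' };
    return { method: verbs[operation], url: url, body: body || null };
}

console.log(buildRequest('read', '/api/PersonApi').method);                      // GET
console.log(buildRequest('create', '/api/PersonApi', { Name: 'Rahul' }).method); // POST
```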

This is where ASP.NET MVC WebApi comes into the picture. It helps us create a Web service on the paradigms of REST. It's similar to WCF, with the difference that an ASP.NET WebApi service is usually consumed by a client (browser), and we have more control over what format (XML/JSON) flows through the wire.

Enough of theory, let's get to the hands-on!

1. Open File –> New Project and select ASP.NET MVC 4 –> Internet template.


2. Add a new class to the Models folder of your solution as below:

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Sex { get; set; }
}

This is the class which we'll be serializing and consuming in JavaScript.

3. Right-click on the Controllers folder and add a new Controller; select “Empty Api Controller” from the dropdown options as shown below:


4. Add a new method returning IEnumerable<Person> to the PersonApiController just added. Our objective is to send over a list of persons from the server and consume it via JSON in JavaScript.

public class PersonApiController : ApiController
{
    public IEnumerable<Person> Get()
    {
        List<Person> lstPerson = new List<Person>()
        {
            new Person() { Name = "Rahul", Age = 28, Sex = "M" },
            new Person() { Name = "Chinmoy", Age = 28, Sex = "M" },
            new Person() { Name = "Charu", Age = 27, Sex = "F" }
        };

        return lstPerson;
    }
}

Just one more thing: since it's a RESTful service, we can decorate our method with one of the HTTP verb attributes. In this case we'll use [HttpGet] (with the Get() naming convention WebApi infers this automatically, but being explicit doesn't hurt).

We are now done with the server side implementation. Now onto the client side of things.

5. Open up Index.cshtml in the Views/Home folder and add the following markup at the end of the page. What we are doing here is placing a button on whose click we'll make a call to get the list of people. Upon successful completion, we display the details in a table.


<input type="button" id="btnGetPeople" value="Get People" />
<div id="divPeople">
    <table>
        <tbody id="tBody"></tbody>
    </table>
</div>

<script type="text/javascript">
    var AllPeople;
    $(document).ready(function () {
        $("#btnGetPeople").click(function () {
            // Do an AJAX call here (the default WebApi route is /api/{controller})
            $.getJSON('/api/PersonApi')
                .done(function (data) {
                    // On success, 'data' contains the list of Person objects.
                    AllPeople = data;
                    for (var i = 0; i < AllPeople.length; i++) {
                        $('#tBody').append('<tr><td>' + AllPeople[i].Name +
                            '</td><td>' + AllPeople[i].Age +
                            '</td><td>' + AllPeople[i].Sex + '</td></tr>');
                    }
                });
        });
    });
</script>



So what does the above code do?

  • It creates markup for a button and an empty html table
  • It declares a JavaScript variable AllPeople for storing the returned list of people. As you can see, JavaScript variables are declared without a type; the language is dynamically typed.
  • It makes a call to our PersonApi controller using a jQuery method called $.getJSON(). It's this call that does all the magic.

ASP.NET MVC WebApi is intelligent enough to identify the format in which data is requested (via the request's Accept header). Here we are requesting JSON, so the WebApi controller action returns JSON; if we request XML, it will happily return XML.
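The decision is driven by the request's Accept header. The sketch below only mimics that decision in a few lines of JavaScript; it is a toy illustration, not WebApi's actual formatter pipeline:

```javascript
// A toy version of content negotiation: pick the response format
// based on the Accept header, defaulting to JSON like WebApi does.
function negotiate(acceptHeader, person) {
    if (acceptHeader.indexOf('application/xml') !== -1) {
        return '<Person><Name>' + person.Name + '</Name></Person>';
    }
    return JSON.stringify(person);
}

var p = { Name: 'Rahul' };
console.log(negotiate('application/json', p)); // {"Name":"Rahul"}
console.log(negotiate('application/xml', p));  // <Person><Name>Rahul</Name></Person>
```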

  • Now, once the web service call is “done” we iterate through our JSON object and populate our table as shown below.


Click the “Get People” button. If you place a breakpoint and inspect the AllPeople variable once it is populated by the returned “data”, it looks like below:


It's like any other array, which we can iterate over.

The final result is as under:


We have successfully consumed and used JSON in our ASP.NET application over a REST service using WebApi. That’s how easy it is folks!

Of course, this was one of the most basic examples I could come up with, BUT hey, you gotta start somewhere (if you haven't already started).

WebApi is cool! JSON is great! (WebApi + JSON) = Awesome !!

If you are creating for the web and are not using these technologies, I would suggest you start being Awesome right away.

Building, consuming and deploying WebApi services has a few Gotchas, which I’ll discuss in a future blog post.

Do post your views/comments/feedback/questions below. I’ll try my best to answer them.

Operation Contract Overloading in WCF.. Not your usual Polymorphism!

There is no denying the fact that Microsoft has made life easy for developers, both Windows and Web. Over the years MS has come up with great IDEs, using which any high-school student could go ahead and build a fully functional application (yes, it's that easy).

Microsoft makes things easy for people by abstracting away the complexities of a technology or development environment. The underlying basics of any technology are abstracted into things like toolboxes and familiar syntax for a developer. BUT sometimes the abstraction feels so real that we forget the very basic principles of the technology we are using.

The same holds true for WCF services. WCF services are actually web services with a lot of MS magic to make them look like any other OOP (Object Oriented Programming) technology. A lot of the magic can be credited to the familiar VS IDE and the C#/VB programming languages.

Just because C#/VB (the languages used in WCF development) support (function) polymorphism does not mean web services support it.

This is exactly what MS does to a lot of naive developers: abstract all the intricacies and complexities of the real thing. Although this (the fact that web services do not support function polymorphism) may sound obvious to a lot of web developers out there, believe me, a whole lot of developers are not aware of it.

It's like JavaScript NOT supporting method overloading; a lot of people who have never had a need to overload methods in their JS files do NOT know this.

But, getting around method overloading problems in JavaScript is a different blog post.

We'll be discussing method overloading in WCF services here – why it does not work and what options we have.

Honestly, even I wasn't aware of this fact until a while ago, when a friend of mine (Rahul Verma) decided to enlighten me on this.

So, on to the issue. You build a WCF application in Visual Studio.

  1. You write out a Service Contract, say IServiceX.cs
  2. You write an Operation Contract say void DoWork();
  3. You write another Operation Contract say void DoWork(string message);
  4. Build the Project; the Project’s Build succeeds.

Now, as soon as you launch your WCF web service (hitting something like http://localhost/ServiceX.svc) –> “BAM!!!”, you have an error message.

Server Error in ‘/’ Application.

Cannot have two operations in the same contract with the same name, methods DoWork and DoWork in type WcfServiceApp.IServiceX violate this rule. You can change the name of one of the operations by changing the method name or by using the Name property of OperationContractAttribute.

Well, as you can see, the error message is self-explanatory. What went wrong is that we assumed that web services behave the same way as any object-oriented language, and that we could therefore overload methods in web services.

The issue occurs when IIS tries to get the metadata of the web service and generate the WSDL; at that point it throws an exception (for the reason stated above). MS might have had their own reasons for not putting a compile-time check for this in WCF services (though it would be nice to have one).

Now, on to the solution and options available to us from here. But before that we need to get our head around 2 facts.

First, we need to accept that method overloading is just a programming technique to keep our code manageable/maintainable. In some cases it also improves the readability of the code. BUT it's not something we cannot do without.

Second, there is absolutely no clean solution to get around this issue. You can never achieve the full benefits of method overloading in web services. That's how the web is built.

Workaround #1

Use the Name property of the OperationContract attribute to differentiate the methods in the WSDL.

[OperationContract(Name = "DoWork1")]
void DoWork();

[OperationContract(Name = "DoWork2")]
void DoWork(string message);

Result: the web service will build and host successfully, BUT when you consume it in a client, the method names in IntelliSense will be DoWork1() and DoWork2(string message).

So, we have achieved overloading on the server, BUT while consuming the methods we still do not have the same method names; we have to call different methods.

Workaround #2

You can specify methods with the same name in different Service Contracts. For example, you could have void DoWork() in IServiceX.cs and void DoWork(string message) in IServiceY.cs.

Result: the web service will build and host successfully, BUT when consuming it you'll have to instantiate different client channels/proxies for IServiceX and IServiceY anyway.

Hence, we still do not have proper method overloading.

Workaround #3

Now, if you are hell-bent on having method overloading available on the client consuming the WCF service, we have a hack for that.

First, implement Workaround #1. Then you can tinker with the generated metadata/proxy classes (Reference.cs) and change the names/attributes of the generated methods.

This way you'll have properly overloaded methods in IntelliSense.

The downside of this method is that the next time the proxy class is regenerated, you'll lose your manual modifications. That's the reason this method is not very practical.

So, going by the above observations, it seems futile to try to achieve method overloading in WCF web services. It's best to avoid it, as implementing it involves a lot of effort and does not bring any real benefit to the table.

Continuous Delivery and Auto Updating Applications

“You Have 2 Update(s) Available..”

Sounds Familiar !!

It is absolutely essential to have such a provision built into our Desktop/Windows applications, as it can prove to be a boon to any enterprise.

Personally, it brings a smile to my face every time I see an update for any one of my mobile apps. 🙂

We need to embrace what is known as the Continuous Delivery software development model.

It's a software development practice wherein we use techniques such as Automated Testing, Continuous Integration and Continuous Deployment to:

  1. Achieve high quality standards.
  2. Easily package and deploy builds to test environments.
  3. Rapidly, reliably and repeatedly push out enhancements and bug fixes to customers, at low risk and with minimal manual overhead.
A standard Continuous Delivery model looks like the one below:

As can be seen from the above illustration, having an inbuilt mechanism to update desktop applications at the user's end is most critical to achieving Continuous Delivery, BUT it is often the most neglected.

We may have already put in all the effort to set up a source control system, a build server and an automated versioning process, all configured so that we can release a new build at the press of a button.

BUT how do we get the new build version to the end users?

This is where an automatically updating application comes into the picture.

If we design an application, which is able to query a server periodically (or on user action) for available updates and is able to update itself to a newer version, we save a lot of time, effort and money invested by IT support to distribute application updates.
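At the heart of such a check is nothing more than comparing the running version against the one the server reports. A minimal sketch of that comparison, written here in JavaScript for brevity (the endpoint name and response shape are hypothetical):

```javascript
// Assume the update server answers GET /api/version with { "latest": "1.4.2" }.
// The client then only needs a dotted-version comparison like this one.
function isNewerVersion(current, latest) {
    var a = current.split('.').map(Number);
    var b = latest.split('.').map(Number);
    for (var i = 0; i < Math.max(a.length, b.length); i++) {
        var x = a[i] || 0, y = b[i] || 0;  // missing parts count as 0
        if (y > x) return true;
        if (y < x) return false;
    }
    return false; // versions are equal
}

console.log(isNewerVersion('1.3.0', '1.4.2')); // true  -> prompt the user to update
console.log(isNewerVersion('1.4.2', '1.4.2')); // false -> nothing to do
```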

All of this might be irrelevant to a web application, since it runs in a browser, BUT for a WinForms application it's indispensable in this connected age.

For a WinForms application running on a desktop, a developer has to sooner or later think about an approach to implement an auto-update mechanism. Ideally, this should be part of the very design of a Windows application: an integral part of it, and not a separate patch later on in the development phase.
Designing a custom update mechanism takes a lot of thought and effort, but there are certain well-established tools and libraries (open source as well as licensed) available out there which can greatly help you out:
  1. ClickOnce
  2. BITS
  3. .NET App Updater Block
  4. NAppUpdate Framework (Strongly recommended; Highly extendible)
  5. NETSparkle

A feature comparison of the various options can be made along the following criteria:

  • Installation of updates into the same folder as the application (i.e. the updated version should NOT create a new folder on the client, as happens with ClickOnce)
  • Updates over HTTPS or, better still, a WCF service (more secure)
  • Hot swap of update files (the ability to update files without restarting the application)
  • Backup and rollback of updates (in the event of failure)
  • Download progress reporting
  • Conditional updates (based on file version, hash key, size etc.)
  • Inbuilt file integrity check based on a file hash (required to ensure that each file has been downloaded correctly/completely)
  • A distinction between the update-download and update-apply steps (all updates should be completely downloaded before any are applied, to maintain the integrity of the updates/application)
  • Inbuilt error handling
  • Ability to perform cold updates (i.e. applying updates after the application shuts down)
  • Plenty of options/points to configure the application
  • Easy extensibility to accommodate new features/interfaces (like new types of update sources)
  • Ability to resume an interrupted update download
None of the above solutions may fit into your project directly, and they may have to be modified. BUT there are a few basic points we need to consider before implementing an auto-update strategy:

1. Pull or push notifications: do you want the user to query the update server, or do you want to show notifications to the user as the result of an automatic, periodic query to the update server?

2. How much control should the end user have over the update process: can the user skip an update?

3. User access rights: a user must have adequate access rights on the application as well as the file system in order to update it. This has to be enforced by the developer.

4. Supporting rollback: what happens if an update fails? What happens if the user is not happy with the new version? Can the user roll back an update? Will the rollback be triggered from the server, or will it simply be a user action on the client?

5. Monitoring version fragmentation: we'll need to maintain a record of which devices are running which versions of the software, and keep a tab on the maximum number of different versions we can serve to different devices. The greater the version fragmentation, the greater the overhead for the developer.

So, if you're designing a desktop application, it's absolutely essential to have an auto-update mechanism. It'll make life easy for the dev team, release team and IT support, as well as the end users (or the enterprise).

Programming Ethics!!

Ethics are an integral part of everyone's life. Although they are far from absolute, they influence a lot of our decisions.
Also, not to mention, since ethics are NEVER absolute, we usually find ourselves negotiating and compromising one for another.

The same applies to programming as well.
As they say :

Just because code compiles and passes the unit test(s) doesn't always mean that it's written right

All our ethics, and the actions driven by them, are primarily ruled by two schools of thought:

  1. Utilitarian perspective: your actions and means are evaluated based on their consequences, i.e. if the end result is good, then the means and actions are good.
  2. Deontological perspective: this school of thought takes into account the action(s) itself, and evaluates the end result based on the type of actions and means used to reach it.

As you might have experienced yourself, following a deontological approach to programming is a little painstaking at times. It would expect us to do at least the following:

      a) follow naming conventions
      b) follow programming best practices, like looking for memory leaks
      c) write and update documentation to reflect the design exactly
      d) put proper comments wherever possible, to make the code understandable for the next person who picks it up
      e) write programmatic unit tests and constantly test your changes against them

    … and the list goes on.

As you might have already decided, it's almost impossible to follow all of 'em.

Most of the time, following a utilitarian approach serves the purpose.
Along with that, we follow a few of the deontological actions to treat ourselves to some moral balm.

But trust me, it's nothing more than a moral balm.
Following a deontological approach in life can land you in all sorts of moral and ethical dilemmas, and may be excruciatingly difficult.

Take for instance the classic Moral Dilemma:

The driver of a runaway tram can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.

There are numerous arguments comparing the two schools of thought. The basic idea is to reach a point of “greater good and lesser evil”.

That’s the reason I say that ethics can never be absolute. You just cannot follow one school of thought always. The approach you follow will more often than not depend upon what’s at Stake for “You”.

Fortunately, following a Deontological approach in programming never lands you in a Moral Dilemma. It just takes more time and effort to follow it.

But, at the end of the day you have a clear conscience. And by that I mean, a conviction that your code will not break, will perform well, would be scalable and maintainable.. Now and in the future.. Because “YOU” have made it that way.

The fact that nobody can question the code you’ve written gives a “kick” to a programmer like nothing else.

So, I suggest that each of us not compromise on the deontological approach when it comes to programming, 'coz that's what will make you a better programmer at the end of the day. Passing just the functional tests will make you just another employee who does his/her job.

Happy Coding!!!

Posted from WordPress for Windows Phone

The Technical Interview – Scratch the Paper !!

Technical interviews, be they .NET, Java, SQL or any other software area, most of the time do NOT evaluate a candidate's true technical knowledge or acumen.

My question is: what exactly should a technical interview assess?

  1. Should it assess candidates on their past technical experience, as in, what technologies they have worked on and to what extent?
  2. Should it assess how interested the candidate is in technology currently? Is he/she learning about, and aware of, the new and emerging technologies relevant to his/her field?
  3. Does the candidate have the necessary theoretical (ahem! bookish) knowledge about the technologies he/she has worked on?

More often than NOT the interviewer ends up evaluating the candidate based on Points 1 and 3.

Actually, more on Point 3, because it is easier. And honestly I find that very unfair.

Why would an interviewer evaluate a candidate just on Point #3 ?

  • Because the interviewer himself/herself does not have sufficient hands-on knowledge of the subject.
  • Because it's easier to search for questions on the subject on MSDN, or by pure googling!!
  • Because such questions have a very narrow scope of answers, so it is easier to evaluate a candidate's response.


If you do NOT qualify in a technical interview, don't lose heart. More often than NOT, it will be because the interviewer was NOT competent enough to interview you in the first place. 🙂

But, of course the above is NOT always true.

Candidates do get rejected, because of a lot of other reasons. But this post is NOT about the candidate, BUT about the interviewer. So we’ll focus on that!

So, what should we ideally be looking for in a candidate in a technical interview?

Should we NOT assess his/her basic (ahem! theoretical) knowledge of a subject?

Well, in my opinion, we should be concerned about Point #2 above more than anything else.

A candidate may have immense Technical Experience in the past and might have worked very close to technology, BUT the question is :

Does the candidate still have the appetite to learn new technologies or is he/she happy with the “way things are” ?

The answer to the above question is very important and yes, interviewers do take this point into consideration at times, BUT they end up asking theoretical Questions about emerging technologies. Again. 🙁

Why!! Well, for obvious reasons of course 😉 !

So, to round up what I am trying to say here, here are a few tips to “The technical interviewer” :

  1. Always come prepared. And by prepared I don’t mean “Google Prepared!”. 
  2. Never ever start a technical interview without a pen and paper. It helps in gauging how much a candidate wants to put his/her point across. If the candidate is NOT comfortable drawing boxes and lines and just orally airs his/her views, then I guess we might have a “theoretical programmer” on our hands. Alarm!!
  3. Be a little kind to candidates. Candidates can be a little edgy about things they have worked on in the past. Try to give them subtle hints on how to get to the answer (not always, though).
  4. Try to ask questions which require the candidate to scratch the paper and NOT necessarily scratch his/her head. 🙂 i.e. try to ask more hands-on and scenario-based questions.
  5. Don't be afraid to scratch questions/problem statements yourself on the paper. After all, you need to let the candidate know what your methodology for evaluating a candidate is.
  6. Try to evaluate how interested a candidate is in learning new technologies relevant to his/her field. Is he/she happy with the way things are, or is he/she actively making an effort to learn new technologies? Again, encourage them to scratch the paper.


So interviewers and Candidates, Happy Scratching!!