ASP.NET MVC Single Page App with Upida/Jeneva.Net (Backend)
INTRODUCTION
Let's try to create a simple web application using the most modern technologies and see what problems we may face. I will use the latest ASP.NET MVC with WebAPI and the latest NHibernate. Don't worry, all the techniques are applicable to Entity Framework too (the downloadable ZIP archive contains an Entity Framework example as well). I am going to use WebAPI to its full potential - i.e., all interactions between the browser and the server will go asynchronously in JSON. In order to accomplish this, I am going to use an MVC JavaScript library - AngularJS.
Note, this article is about the back-end only. If you are curious about the front-end side, then follow this link: Asp.Net Mvc Single Page App with Upida/Jeneva (Frontend/AngularJS).
Let's imagine we have a simple database with two tables: Client and Login; every client can have one or more logins. My application will have three pages - "list of clients", "create client", and "edit client". The "create client" and "edit client" pages will be capable of editing client data as well as managing the list of child logins. You can see how it works here.
First of all, let's define the domain (or model) classes (mapping is defined in hbm files):
public class Client
{
    public virtual int? Id { get; set; }
    public virtual string Name { get; set; }
    public virtual string Lastname { get; set; }
    public virtual int? Age { get; set; }
    public virtual ISet<Login> Logins { get; set; }
}

public class Login
{
    public virtual int? Id { get; set; }
    public virtual string Name { get; set; }
    public virtual string Password { get; set; }
    public virtual bool? Enabled { get; set; }
    public virtual Client Client { get; set; }
}
Now, I can create the Data-Access layer. First of all, I must have a base DAO class which is injected with the NHibernate SessionFactory, and defines basic DAO operations: Save, Delete, Update, Load, Get, etc.
public class Daobase<T> : IDaobase
{
    public ISessionFactory SessionFactory { get; set; }

    public void Save(T entity)
    {
        this.SessionFactory
            .GetCurrentSession()
            .Save(entity);
    }

    public void Update(T entity)
    {
        this.SessionFactory
            .GetCurrentSession()
            .Update(entity);
    }

    public ITransaction BeginTransaction()
    {
        return this.SessionFactory
            .GetCurrentSession()
            .BeginTransaction();
    }

    /* other basic methods */
}
I have only one DAO class - ClientDao.
public class ClientDao : Daobase<Client>, IClientDao
{
    public Client GetById(int id)
    {
        return this.SessionFactory
            .GetCurrentSession()
            .CreateQuery("from Client client left outer join fetch client.Logins where client.Id = :id")
            .SetParameter<int>("id", id)
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .UniqueResult<Client>();
    }

    public IList<Client> GetAll()
    {
        return this.SessionFactory
            .GetCurrentSession()
            .CreateQuery("from Client client left outer join fetch client.Logins")
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .List<Client>();
    }
}
When the DAO is done, we can switch to the Service layer. The service is usually responsible for opening and closing transactions. I have only one service class, and it is injected with the respective DAO class.
Note, the Save and Update methods accept a Client object and its child Logins, and therefore perform the save or update using NHibernate cascading (persisting parent and children at the same time).
public class ClientService : IClientService
{
    public IClientDao ClientDao { get; set; }

    public Client GetById(int clientId)
    {
        Client item = this.ClientDao.GetById(clientId);
        return item;
    }

    public List<Client> GetAll()
    {
        List<Client> items = this.ClientDao.GetAll();
        return items;
    }

    public void Save(Client item)
    {
        using(ITransaction tx = this.ClientDao.BeginTransaction())
        {
            /* TODO: assign back-references of the child Login objects -
               for each login in item.Logins: login.Client = item; */
            this.ClientDao.Save(item);
            tx.Commit();
        }
    }

    public void Update(Client item)
    {
        using(ITransaction tx = this.ClientDao.BeginTransaction())
        {
            Client existing = this.ClientDao.GetById(item.Id.Value);
            /* TODO: copy changes from item to existing (recursively) */
            this.ClientDao.Merge(existing);
            tx.Commit();
        }
    }
}
Let's talk a bit about controllers. I am going to have two controllers - one for HTML views (MVC controller), and one for JSON requests (API controller). Both of them will be called ClientController, but will reside in different namespaces. The MVC controller will derive from System.Web.Mvc.Controller, and the API controller - from System.Web.Http.ApiController. The MVC controller will be responsible for displaying the correct view. Here is how it looks:
public class ClientController : System.Web.Mvc.Controller
{
    public ActionResult Index()
    {
        return this.View();
    }

    public ActionResult Create()
    {
        return this.View();
    }

    public ActionResult Edit()
    {
        return this.View();
    }
}
The API controller is a bit more complicated, because it is responsible for interactions with the database. It is injected with the respective service layer class.
public class ClientController : System.Web.Http.ApiController
{
    public ClientService ClientService { get; set; }

    public IList<Client> GetAll()
    {
        return this.ClientService.GetAll();
    }

    public Client GetById(int id)
    {
        return this.ClientService.GetById(id);
    }

    public void Save(Client item)
    {
        this.ClientService.Save(item);
    }

    public void Update(Client item)
    {
        this.ClientService.Update(item);
    }
}
Now, we have almost everything we need. The MVC controller will give us HTML and JavaScript, which will interact asynchronously with the API controller and fetch data from the database. AngularJS will help us to display the fetched data as beautiful HTML. I assume that you are familiar with AngularJS (or KnockoutJS), though it is not that important in this article. The only thing you must know is this: every page is loaded as static HTML and JavaScript (without any server-side scripts); after being loaded, it interacts with the API controller to load all the needed pieces of data from the database asynchronously through JSON. And AngularJS helps to display that JSON as beautiful HTML.
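For reference, the JSON endpoints are plain WebAPI actions. Below is a minimal sketch of the kind of route registration this setup assumes - the route template and the WebApiConfig class name are my own illustration, not code taken from the downloadable archive:
using System.Web.Http;

// Illustrative WebAPI configuration (assumed, not from the article's source):
// an action-based route so that URLs like /api/Client/GetAll and
// /api/Client/GetById/5 reach the ApiController actions shown above.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "ActionApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}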
PROBLEMS
Now, let's talk about problems that we face in the current implementation.
Problem 1. The first problem is serialization. Data returned from the API controller is serialized to JSON. You can see it in these two API controller methods:
public class ClientController : System.Web.Http.ApiController
{
    ....

    public IList<Client> GetAll()
    {
        return this.ClientService.GetAll();
    }

    public Client GetById(int id)
    {
        return this.ClientService.GetById(id);
    }
The Client class is a domain class, and it is wrapped with an NHibernate proxy. So, serializing it can result in a circular dependency and will cause a StackOverflowException. But there are other minor concerns. For example, sometimes I need only the Id and Name fields to be present in the JSON, sometimes I need all the fields. The current implementation does not allow me to make that decision; it will always serialize all fields.
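Without a dedicated tool, the only way around this is to trim the objects by hand before they are serialized. Here is a rough sketch of what that would look like in the API controller (GetAllTrimmed is a hypothetical method of my own, not code from the article):
// Hand-rolled trimming - exactly the kind of code we would rather not write.
public IList<Client> GetAllTrimmed()
{
    IList<Client> clients = this.ClientService.GetAll();
    foreach (Client client in clients)
    {
        // Break the Client <-> Login cycle so the serializer does not recurse forever.
        if (client.Logins != null)
        {
            foreach (Login login in client.Logins)
            {
                login.Client = null;
            }
        }
        // ...and null out every other field this particular view does not need,
        // repeated for every JSON "shape" the application requires.
    }
    return clients;
}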
Problem 2. If you take a look at the Save method of the ClientService class, you will see that there is some code missing.
public void Save(Client item)
{
    using(ITransaction tx = this.ClientDao.BeginTransaction())
    {
        /* code that assigns back-references of the child Login objects */
        this.ClientDao.Save(item);
        tx.Commit();
    }
}
This means that before saving the Client object, you have to set up the back-references of the child Login objects. Every Login class has a field, Client, which is actually a back-reference to the parent Client object. So, in order to save a Client together with its Logins using cascading save, you have to set those fields to the actual parent instance. When a Client is deserialized from JSON, it does not have back-references. This is a well-known problem among NHibernate users.
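The missing piece is essentially the loop hinted at by the TODO comment above - something along these lines:
// Manually restore the back-references lost during JSON deserialization,
// so that NHibernate cascading can persist the parent and its children together.
if (item.Logins != null)
{
    foreach (Login login in item.Logins)
    {
        login.Client = item;
    }
}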
Problem 3. If you take a look at the Update method of the ClientService class, you will see that there is some code missing too.
public void Update(Client item)
{
    using(ITransaction tx = this.ClientDao.BeginTransaction())
    {
        Client existing = this.ClientDao.GetById(item.Id.Value);
        /* code that copies changes from item to existing */
        this.ClientDao.Merge(existing);
        tx.Commit();
    }
}
I also have to implement logic which copies the fields from the deserialized Client object to the existing persistent instance of the same Client. My code must be smart enough to go through the child Logins. It must match the existing logins with the deserialized ones and copy the fields respectively. It must also append newly added Logins and delete missing ones. After these modifications, the Merge() method will persist all the changes to the database. So this is quite sophisticated logic.
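To give an idea of what that missing code involves, here is a rough, simplified sketch of the manual merge logic (my own illustration using LINQ; real code would also need null checks and NHibernate-specific collection handling):
// (this fragment assumes using System.Linq and System.Collections.Generic)

// Copy the scalar fields from the deserialized object to the persistent one.
existing.Name = item.Name;
existing.Lastname = item.Lastname;
existing.Age = item.Age;

// Match the children by Id: update existing logins, append new ones.
foreach (Login incoming in item.Logins)
{
    Login target = existing.Logins.FirstOrDefault(l => l.Id == incoming.Id);
    if (target == null)
    {
        incoming.Client = existing;          // a login the user has just added
        existing.Logins.Add(incoming);
    }
    else
    {
        target.Name = incoming.Name;         // a login the user has edited
        target.Password = incoming.Password;
        target.Enabled = incoming.Enabled;
    }
}

// Delete the logins that are no longer present in the incoming object.
List<Login> removed = existing.Logins
    .Where(l => !item.Logins.Any(i => i.Id == l.Id))
    .ToList();
foreach (Login login in removed)
{
    existing.Logins.Remove(login);
}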
In the next section, we will solve these three problems using Jeneva.Net.
SOLUTION
Problem 1 - Smart Serialization
Let's see how Jeneva.Net can help us to solve the first problem. The ClientController has two methods that return Client objects - GetAll() and GetById(). The GetAll() method returns a list of Clients, which is displayed as a grid; here, I don't need all the fields of the Client object to be present in the JSON. The GetById() method is used on the "Edit Client" page, therefore the full Client information is required here.
In order to solve this problem, I have to go through each property of the returned objects and assign a NULL value to every property that I don't need. This seems like pretty hard work, because I have to do it differently in every method. Jeneva.Net provides us with the Jeneva.Mapper class, which can do it for us. Let's modify the service layer using the Mapper class.
public class ClientService
{
    public IMapper Mapper { get; set; }
    public IClientDao ClientDao { get; set; }

    public Client GetById(int clientId)
    {
        Client item = this.ClientDao.GetById(clientId);
        return this.Mapper.Filter(item, Levels.DEEP);
    }

    public List<Client> GetAll()
    {
        List<Client> items = this.ClientDao.GetAll();
        return this.Mapper.FilterList(items, Levels.GRID);
    }
    .....
It looks very simple: Mapper takes the target object or a list of objects and produces a copy of them, but every unneeded property is set to NULL. The second parameter is a numeric value which represents the level of serialization. Jeneva.Net comes with default levels, but you are free to define your own.
public class Levels
{
    public const byte ID = 10;
    public const byte LOOKUP = 20;
    public const byte GRID = 30;
    public const byte DEEP = 40;
    public const byte FULL = 50;
    public const byte NEVER = byte.MaxValue;
}
The last step is to decorate every property of my domain classes with the corresponding level. I am going to use the DtoAttribute from Jeneva.Net to decorate the properties of the Client and Login classes:
public class Client : Dtobase
{
    [Dto(Levels.ID)]
    public virtual int? Id { get; set; }

    [Dto(Levels.LOOKUP)]
    public virtual string Name { get; set; }

    [Dto(Levels.GRID)]
    public virtual string Lastname { get; set; }

    [Dto(Levels.GRID)]
    public virtual int? Age { get; set; }

    [Dto(Levels.GRID, Levels.LOOKUP)]
    public virtual ISet<Login> Logins { get; set; }
}
public class Login : Dtobase
{
    [Dto(Levels.ID)]
    public virtual int? Id { get; set; }

    [Dto(Levels.LOOKUP)]
    public virtual string Name { get; set; }

    [Dto(Levels.GRID)]
    public virtual string Password { get; set; }

    [Dto(Levels.GRID)]
    public virtual bool? Enabled { get; set; }

    [Dto(Levels.NEVER)]
    public virtual Client Client { get; set; }
}
After all properties are decorated, I can use the Mapper class. For example, if I call the Mapper.Filter() method with Levels.ID, then only properties marked with ID will be included. If I call the Mapper.Filter() method with Levels.LOOKUP, then properties marked with ID and LOOKUP will be included, because ID is less than LOOKUP (10 < 20). Take a look at the Client.Logins property; as you can see, there are two levels applied there. What does it mean? It means that if you call the Mapper.Filter() method with Levels.GRID, then the Logins will be included, but the LOOKUP level will be applied to the properties of the Login class. And if you call the Mapper.Filter() method with a level higher than GRID, then the level applied to the Login properties will become respectively higher.
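For example, with the decorated classes above, the calls below would behave as described in the comments (the variable names are mine):
// Only Id survives - every other property is NULL in the returned copy.
Client idOnly = this.Mapper.Filter(client, Levels.ID);

// Id and Name survive (ID and LOOKUP are <= 20); Lastname, Age and Logins are NULL.
Client lookup = this.Mapper.Filter(client, Levels.LOOKUP);

// Id, Name, Lastname and Age survive, and Logins is included as well,
// but every Login inside it is filtered at the LOOKUP level (Id and Name only).
Client grid = this.Mapper.Filter(client, Levels.GRID);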
Problem 2 - Back-references
Take a look at the Save method of the service layer class. As you see, this method accepts a Client object. I use cascading save - I save the Client and its Logins together. In order to accomplish this, the child Login objects must have their back-references assigned correctly to the parent Client object. Basically, I have to loop through the child Logins and assign the Login.Client property to the root Client. When this is done, I can save the Client object using NHibernate tools.
Instead of writing a loop, I am going to use the Jeneva.Mapper class again. Let's modify the ClientService class.
public class ClientService
{
    public IMapper Mapper { get; set; }
    public IClientDao ClientDao { get; set; }
    ....

    public void Save(Client item)
    {
        using(ITransaction tx = this.ClientDao.BeginTransaction())
        {
            this.Mapper.Map(item);
            this.ClientDao.Save(item);
            tx.Commit();
        }
    }
This code will recursively go through the properties of the Client object and set up all the back-references. This is actually half of the solution; the other half goes in the code below. Every child class must implement the IChild interface, where it can tell who its parent is. The ConnectToParent() method will be called internally by the Mapper class. The Mapper will suggest possible parents based on the JSON.
public class Login : Dtobase, IChild
{
    public virtual int? Id { get; set; }
    public virtual string Name { get; set; }
    public virtual string Password { get; set; }
    public virtual bool? Enabled { get; set; }
    public virtual Client Client { get; set; }

    public void ConnectToParent(Object parent)
    {
        if(parent is Client)
        {
            this.Client = parent as Client;
        }
    }
}
If the IChild interface is implemented correctly, you only have to call the Map() method from your service layer, and all back-references will be assigned correctly.
Problem 3 - Mapping Updates
The third problem is the most complicated one, because updating a client is a complicated process. In my case, I have to update the client's fields as well as the fields of the child logins, and at the same time I have to append or delete child logins if the user has inserted a new login or deleted an existing one. By the way, updating any object, even if you are not using cascading updates, is complicated - mostly because when you want to update an object, you always have to write custom code to copy changes from the incoming object to the existing one. Usually the incoming object contains just a few important fields to update; the rest are NULLs, and therefore you cannot rely on a blind copy of all the fields, as you don't want NULLs to be copied over the existing data.
The Mapper class can copy changes from the incoming object to the persistent one without overwriting any important fields. How does it work? Jeneva.Net comes with a JenevaJsonFormatter class, which derives from the Json.Net formatter used by default in ASP.NET MVC 5. JenevaJsonFormatter contains some minor adjustments. As you know, every domain class derives from the Jeneva.Dtobase abstract class. This class contains a HashSet of property names. When JenevaJsonFormatter parses JSON, it passes the information about the parsed fields to Dtobase, and the Dtobase object remembers which fields were assigned. Therefore, every domain object knows which of its fields were assigned during JSON parsing. Later, the Mapper class goes through only the assigned properties of the incoming deserialized object and copies their values to the existing persistent object.
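Conceptually, the mechanism looks something like the sketch below. This is only an illustration of the idea - the class and method names (TrackedDto, MarkAssigned, IsAssigned) are mine, not the actual Jeneva.Dtobase source:
using System.Collections.Generic;

// Illustrative base class: it records which properties the JSON formatter
// actually assigned, so a mapper can later copy just those fields.
public abstract class TrackedDto
{
    private readonly HashSet<string> assigned = new HashSet<string>();

    // Called by the formatter for every property it finds in the JSON payload.
    public void MarkAssigned(string propertyName)
    {
        this.assigned.Add(propertyName);
    }

    public bool IsAssigned(string propertyName)
    {
        return this.assigned.Contains(propertyName);
    }
}

// The mapper would then copy only the assigned properties, e.g.:
//   if (incoming.IsAssigned("Name")) existing.Name = incoming.Name;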
Here is the service layer Update() method using the Mapper class:
public class ClientService
{
    public IMapper Mapper { get; set; }
    public IClientDao ClientDao { get; set; }
    ....

    public void Update(Client item)
    {
        using(ITransaction tx = this.ClientDao.BeginTransaction())
        {
            Client existing = this.ClientDao.GetById(item.Id.Value);
            this.Mapper.MapTo(item, existing);
            this.ClientDao.Merge(existing);
            tx.Commit();
        }
    }
}
And here is Global.asax.cs - you can see how to set up JenevaJsonFormatter as the default formatter in your web application. Please don't worry about switching from the Json.Net formatter; if you take a look at the Jeneva formatter, it derives from Json.Net and introduces just minor changes. Execute this code in the Application_Start event.
GlobalConfiguration.Configuration.Formatters.Remove(
    GlobalConfiguration.Configuration.Formatters.JsonFormatter);
GlobalConfiguration.Configuration.Formatters.Add(new JenevaJsonFormatter());
JenevaJsonFormatter also converts all property names to Java conventions: Name becomes name, LoginName becomes loginName. These conversions make the JSON more portable and more JavaScript friendly. It also enables you to replace the WebAPI backend with Java/Spring without any modifications.
NOTES
Solving the above-mentioned problems is the biggest part of what Jeneva.Net can do. However, there is another interesting feature that can help you in implementing validation routines - both server-side and client-side.
You can find out more details on how to implement validation using Jeneva.Net in this article: Validating incoming JSON using Upida.Net/Jeneva.Net.
And you can also find out how to create a Single-Page web Application (SPA) using AngularJS in my next article: Asp.Net Mvc Single Page App with Upida/Jeneva (Frontend/AngularJS).