Sunday, December 27, 2015

Best practices for good Restful API design

Ironically, I am going to start this with the admission that good API design is not easy to achieve. People often don’t give it much thought before they start designing their APIs, and a little later in the lifecycle they realize they are writing too much repeated code, or that the API is getting hard to maintain or to keep up with the consumer applications’ needs.

It is very important that, even before you write a single line of code for your API, you think through end to end what the purpose of your API is and who will be consuming it. Remember, unlike a website, your API consumers will be developers, who are generally impatient and always prefer to integrate quickly and easily. So before crafting your API, you need to consider the aspects below carefully.

Data design & structure


First, SOAP (XML) or JSON? – Nowadays at least this decision has been simplified, as everyone is happy consuming JSON.

Secondly, you need to understand what data you will be exposing with your API and design your API’s endpoints and methods accordingly. This is very important, as it determines how the endpoints of your API will look, and if they don’t make sense logically, people will end up using the wrong ones and making mistakes.

And finally, who are the consumers of your API? Is it generic or business specific? This will drive further aspects of your API design.


Vocabulary


What HTTP verbs to use and where? Everyone sort of knows HTTP GET & POST, but there are 5 other verbs available as well which you can use to enhance the experience of your API. Also, from a security perspective, it is important to use the right verb for certain actions. Let’s see which verb is meant for what:-
  1. GET:  Get is equivalent to SELECT to fetch some information from the server.
  2. POST: Post is used to Create a resource on the server.
  3. PUT: Put is equivalent to Update when the whole resource is passed as a request.
  4. PATCH: Patch is again for update but only the information changed is sent as a request.
  5. DELETE: Delete is used to remove a resource from the server.
  6. HEAD: Returns metadata about the resource, for example when it was last modified.
  7. OPTIONS: Returns what you can do with the resource, for example whether it is read-only or read-write.
Now, browsers tend to behave differently for the verbs above. For example, HTTP GET requests can be cached by the browser, so make sure you are not using this verb for something which has to be unique every time, like returning a new GUID or the next available counter.

POST, on the other hand, is neither safe nor idempotent, which is why browsers warn you before re-submitting a POST. HEAD & OPTIONS are not used that much, but HEAD is essentially a GET without a response body and can also be cached by the browser.
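
To make the mapping concrete, here is a minimal sketch of how these verbs might line up against a single resource in an ASP.NET Web API controller. The OrdersController, the Order type and the routes are made-up names for illustration, not code from a real project:

using System.Net;
using System.Web.Http;

public class Order { public int Id { get; set; } public string Item { get; set; } }

public class OrdersController : ApiController
{
    // GET api/orders/5 - read a resource (the SELECT of HTTP)
    public IHttpActionResult Get(int id) { return Ok(new Order { Id = id }); }

    // POST api/orders - create a new resource
    public IHttpActionResult Post(Order order) { return Created("api/orders/" + order.Id, order); }

    // PUT api/orders/5 - replace the whole resource
    public IHttpActionResult Put(int id, Order order) { return Ok(order); }

    // PATCH api/orders/5 - apply a partial update (only the changed fields are sent)
    [AcceptVerbs("PATCH")]
    public IHttpActionResult Patch(int id, Order changes) { return Ok(changes); }

    // DELETE api/orders/5 - remove the resource
    public IHttpActionResult Delete(int id) { return StatusCode(HttpStatusCode.NoContent); }
}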


Security


APIs exist purely for information sharing and hence need to be secured with the highest level of security you can implement. I have seen API security evolve since I wrote my first web service, from transport-level security to the identity-aware security we have nowadays.

Your API communication should always be over SSL/TLS to ensure transport-level security. Then you need to add some sort of protection (for example throttling) to your endpoints as well, to save them from DoS attacks, and depending upon the type of data your service returns, you can include identity verification logic in your endpoint, i.e. person A querying the data can only query his/her own data.
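
A rough sketch of that last check, assuming an ASP.NET Web API action and a hypothetical account lookup, could look like this:

using System.Net;
using System.Web.Http;

public class Account { public int Id; public string OwnerUserName; public decimal Balance; }

public class AccountsController : ApiController
{
    [Authorize] // insist on an authenticated caller before anything else
    public IHttpActionResult Get(int id)
    {
        Account account = FindAccount(id);
        if (account == null)
            return NotFound();

        // person A can only query his/her own data
        if (account.OwnerUserName != User.Identity.Name)
            return StatusCode(HttpStatusCode.Forbidden);

        return Ok(account);
    }

    private Account FindAccount(int id)
    {
        // stand-in for a real repository/database call
        return new Account { Id = id, OwnerUserName = User.Identity.Name, Balance = 100 };
    }
}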


Response Status codes


It is important that your API doesn’t throw raw errors and break the consumer’s code. The best way to do this is to handle everything in your code and return HTTP status codes. Don’t invent your own codes; use the standard HTTP codes assigned for these purposes, as clients and browsers already know how to behave on them. Below are some well-known HTTP status codes you can use:-
  1. 200 OK – Used commonly against GET requests where data/resource requested from the server has been found & returned.
  2. 201 CREATED – Used against POST/PUT/PATCH where the request has been fulfilled and resulted in a new resource being created
  3. 202 ACCEPTED - The request has been accepted for processing, but the processing has not been completed.
  4. 204 No Content - The server successfully processed the request, but is not returning any content.
  5. 400 Bad Request - The server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing)
  6. 401 Unauthorized – Authentication failed
  7. 404 Not Found - The requested resource could not be found on the server
  8. 500 Internal Server error - A generic error message, given when an unexpected condition was encountered and no more specific message is suitable.

The above list is courtesy of Wikipedia. These are just a few of the available HTTP status codes; you can use any of them as per your requirements.
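
As a small illustration, and assuming a Web API controller with a made-up in-memory store, returning the right code is usually a one-liner per outcome:

using System.Collections.Generic;
using System.Web.Http;

public class CustomersController : ApiController
{
    // made-up in-memory store, just to have something to look up
    private static readonly Dictionary<int, string> Customers = new Dictionary<int, string> { { 1, "Piyush" } };

    public IHttpActionResult Get(int id)
    {
        if (id <= 0)
            return BadRequest("id must be a positive number");   // 400 - client error

        string name;
        if (!Customers.TryGetValue(id, out name))
            return NotFound();                                    // 404 - no such resource

        return Ok(name);                                          // 200 - resource in the body
    }

    public IHttpActionResult Post([FromBody] string name)
    {
        int id = Customers.Count + 1;
        Customers[id] = name;
        return Created("api/customers/" + id, name);              // 201 - new resource created
    }
}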


Versioning


No matter how much thought you put into designing your API endpoints & behavior, they will change. APIs evolve, and you will be required to change the contract to add new features. Remember, the API contract is an agreement of sorts between client and server; changing it will break client applications, and that is why you need to version your endpoints.

Hence, keep a good versioning strategy in place and, instead of changing existing contracts, release a new version. This will keep the existing client applications running and give new clients the functionality they want. Another benefit of doing this is that you can mark the older methods as deprecated and promote usage of the new version of your API.
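
One simple way of doing this, sketched below with ASP.NET Web API 2 attribute routing (assuming it is enabled via config.MapHttpAttributeRoutes()) and made-up controller names, is to keep the version in the URL so that v1 keeps working while v2 evolves:

using System.Web.Http;

[RoutePrefix("api/v1/products")]
public class ProductsV1Controller : ApiController
{
    // GET api/v1/products - the original contract, kept alive for existing clients
    [Route("")]
    public IHttpActionResult Get() { return Ok(new[] { "product-name" }); }
}

[RoutePrefix("api/v2/products")]
public class ProductsV2Controller : ApiController
{
    // GET api/v2/products - the new contract with richer data for new clients
    [Route("")]
    public IHttpActionResult Get() { return Ok(new[] { new { Name = "product-name", Price = 10.0 } }); }
}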


Analytics


This is not really a functional requirement, but it can be really handy when making major decisions around your API. Always have some sort of analytics in place for your API endpoints and methods.


Documentation


I know this is not a favorite topic among developers, but it is one of the crucial factors for your API’s consumers. Remember, if you end up integrating a third-party API into your application, what would your first question be? Is there any documentation available for it?

So keep in mind below things to include in your API documentations:-
  1. Endpoint/methods names and version available in the API.
  2. Sample request/response for methods
  3. Response status codes and what they mean in your API
  4. Error messages associated with your endpoint methods; this can include both validation errors & business error messages
One more thing you can add to your API, if you have time for it, is an experimental console where developers can play with your API. It is not difficult if you use one of the out-of-the-box tools for this. One I know of, and which is very good, is Swagger.
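
As a rough illustration, the Swashbuckle NuGet package for ASP.NET Web API wires Swagger and its interactive UI up with roughly this one-time configuration (the version label and title below are placeholders, and the exact API may differ between Swashbuckle versions):

using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c => c.SingleApiVersion("v1", "My Sample API"))   // generates the Swagger document
            .EnableSwaggerUi();                                              // serves the interactive console under /swagger
    }
}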

This is the list I have, and obviously you are not limited to it; you can go as far as you like in making your API self-explanatory.

The last thing I want to discuss is the “Hypermedia API”. These APIs have been known about for a while but are not really that popular yet, and I definitely think they are the future of APIs.

Hypermedia APIs


Hypermedia APIs are designed to overcome an obvious limitation of the (REST or SOAP) APIs that currently exist. In order to consume any API today, you need to know the URL of the endpoint beforehand. Not only that, this URL is part of the contract you share with the client, meaning that changing the URL breaks the contract and the client application.

It is not a complicated concept to understand. Hypermedia is based on how users interact with any website in a web browser. You navigate to the root URL of the website, you are presented with links to go further with whatever you wish to do on the site, and so on and so on, until you reach the page with the information you are looking for. Hypermedia APIs mimic the same behavior in the API world.

In Hypermedia APIs, the URLs are not hardcoded but discovered at runtime. This may not sound like a revolutionary idea, but it gives API developers the freedom to change and enhance the API without breaking client apps. The same idea can be used to scale your application or to transparently test your API without affecting anything else.

Consider the small example below (taken from Wikipedia):-

GET request to fetch an Account resource, requesting details in an XML representation

GET /accounts/12345 HTTP/1.1
Host: bank.example.com
Accept: application/xml

Here is the response:

HTTP/1.1 200 OK
Content-Type: application/xml

<?xml version="1.0"?>
<account>
   <account_number>12345</account_number>
   <balance currency="usd">100.00</balance>
   <link rel="deposit" href="https://bank.example.com/accounts/12345/deposit" />
   <link rel="withdraw" href="https://bank.example.com/accounts/12345/withdraw" />
   <link rel="transfer" href="https://bank.example.com/accounts/12345/transfer" />
   <link rel="close" href="https://bank.example.com/accounts/12345/close" />
</account>

Note the response contains 4 possible follow-up links - to make a deposit, a withdrawal, a transfer or to close the account.

Sometime later the account information is retrieved again, but now the account is overdrawn:

HTTP/1.1 200 OK
Content-Type: application/xml

<?xml version="1.0"?>
<account>
   <account_number>12345</account_number>
   <balance currency="usd">-25.00</balance>
   <link rel="deposit" href="https://bank.example.com/accounts/12345/deposit" />
</account>

Now only one link is available: to deposit more money. In its current state, the other links are not available; hence the term Engine of Application State. Which actions are possible varies as the state of the resource varies.

A client does not need to understand every media type and communication mechanism offered by the server. The ability to understand new media types can be acquired at run-time through "code-on-demand" provided to the client by the server.

You can also implement intelligent security around your APIs using hypermedia, by showing the next actions only to an authorized audience.

This is just a small example of what a Hypermedia API can do. You can find much more detail about the architectural and functional benefits of Hypermedia APIs with a little searching.

I hope you found this information helpful.

Friday, December 25, 2015

IIS Policy Agent does more than Injecting Headers

I didn’t touch on the Policy Agent in my last blog while introducing the ForgeRock components, but the Policy Agent is effectively a part of OpenAM itself.

The Policy Agent is a traffic inspector which intercepts each incoming request to your web server and checks whether the user is allowed to access the URL or not. If the URL is configured as a secured URL, the Policy Agent intercepts the request and sends the user to the OpenAM login page. The user is authenticated there and then sent back to the original URL he/she was trying to access. The picture below sort of depicts what I am trying to explain here. (Image downloaded from Google)

The Policy Agent is also capable of injecting headers into the request for authenticated users. The headers can contain attributes which the application can use to set up the user session. Policy agents come in different flavors depending upon what type of application and web server you have: you can have the IIS Policy agent, Apache or J2EE.

Theoretically, you should be able to use any of the above in front of your application, as all a Policy agent has to do is intercept traffic and send users to the login page when required. But recently I learned something interesting: the IIS Policy agent in front of a .Net application does not only intercept the traffic and inject headers, it also sets the security context of the user in the application, after which Request.IsAuthenticated returns true if you check it in your app. The same doesn’t happen with the Apache Policy agent.

So let’s just check two scenarios below.

First, I have an Asp.Net application protected by the IIS Policy agent on my local IIS. The app has no custom code written in it; it just prints the header values & the values of some security context variables.
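
(For reference, the page itself is nothing more than a dump of the request headers and the standard security context flags, roughly along these lines; this is a sketch of what my test page does rather than the exact code.)

using System;
using System.Threading;
using System.Web.UI;

public partial class Dump : Page   // plain Web Forms page, no agent-specific code at all
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // headers injected by the policy agent (plus the OpenAM cookie sent by the browser)
        foreach (string key in Request.Headers.AllKeys)
            Response.Write(key + " = " + Request.Headers[key] + "<br/>");

        // security context values - these are the ones that differ between the two agents
        Response.Write("Request.IsAuthenticated = " + Request.IsAuthenticated + "<br/>");
        Response.Write("User.Identity.IsAuthenticated = " + User.Identity.IsAuthenticated + "<br/>");
        Response.Write("Thread.CurrentPrincipal.Identity.IsAuthenticated = " + Thread.CurrentPrincipal.Identity.IsAuthenticated + "<br/>");
    }
}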

See below what it prints post login from OpenAM. The value in green is the OpenAM cookie, the values in blue are my headers, but the values in red are, surprisingly, all True. Post authentication, the IIS Policy agent sets the identity in a WindowsPrincipal & GenericPrincipal, after which Request.IsAuthenticated returns true.

In the second scenario, we have the same application code hosted on a separate website in IIS, which is now protected by an Apache policy agent also running locally on my machine. When I try to access this site, the agent running on Apache sends me to the OpenAM login page and, post authentication, I again land on my application. But notice the difference in the values in red.

The Apache Policy agent does not set the user identity in a WindowsPrincipal or GenericPrincipal, due to which Request.IsAuthenticated returns false even after a successful login through OpenAM. That is not something you want in a .Net application after the user has logged in.

Ideally you shouldn’t be using the Apache Policy agent for IIS websites, as the security should sit as close to your application as possible, but we had to because the OpenAM IIS policy agent is not capable of securing multiple websites on IIS, and the 4.0 release, which can do this, has its own problems.

Anyway, I still need to understand how the IIS Policy agent sets this security context for the application and how we can mimic this behavior with Apache. I will share that as well when I figure it out.

Thursday, December 24, 2015

A new topic in life - ForgeRock

It's been a while since I have written anything on my blog and the reason has been ForgeRock. In IT life, every two years you are introduced to a new game for which you have to learn the rules and play or else you will be sitting in the pavilion :)

Hence I have been a ForgeRock monkey for the last 6 months, hopping over the branches of OpenAM, OpenIDM & OpenDJ. I am not going to share my plans & feelings around this game, but overall I quite liked it, and hence I am writing a quick intro to the ForgeRock technology in this blog.


ForgeRock


ForgeRock is a company providing identity & access management solutions via its products OpenAM, OpenIDM, OpenDJ, OpenIG & OpenUMA. The products were originally part of Sun, but when Sun was taken over by Oracle, ForgeRock became a separate entity in itself. You can read more about this on the wiki here.

For technical people, it is important to mention that it is a Java-based product. Pure Java is not required most of the time, though, as most customizations happen in server-side JavaScript. And when we talk about Java, the surrounding components automatically switch from IIS to Apache & Tomcat, from Windows to Linux, and so on.

You can imagine how exciting my life has been in the last six months, being a purely Microsoft guy and getting into all this. Anyway, it has been a good experience, and all these bloody things are no longer a black box for me, so let’s get into it.

OpenAM


OpenAM in the ForgeRock family is responsible for access management: authentication, SSO, adaptive risk, federation and so on. It is a highly scalable, modular & customizable product. You can read about OpenAM here.

OpenIDM


OpenIDM is identity management. This one gives you out-of-the-box functionality around various identity management use cases, like user provisioning to backend systems, user self-service and workflows around different processes, and since nowadays everything is in the cloud, it gives you connectors for various SaaS products like Google, Salesforce & Office 365. You can read more about it here.

OpenDJ


And as we need some sort of database or directory in all applications, we have OpenDJ for this in the ForgeRock stack. OpenDJ provides directory services with high performance, scalability & availability. You can read about it here.


OpenIG


This is a newer member of the ForgeRock stack and basically acts as an identity gateway for your legacy applications and APIs, providing lots of out-of-the-box functionality like password capture and replay, API security, etc. Read more about it here.

OpenUMA


Again, this is a new addition to the ForgeRock stack: User-Managed Access. It looked quite powerful when I saw the demo, but we aren’t using it so far. Read more about it here.

These are all the products ForgeRock has to offer for identity and access management. Having used it so far, a good thing I can say about it is “it is highly customizable”. A bad thing I have to say about it is “it is highly customizable”. The problem is that everything is customizable and you have to configure it, which makes you feel like you have been forced to sit in the cockpit of a Boeing 747 without a manual.


But still, it is a good product if used in the right environment & infrastructure by the right people. I will share a few use cases of what you can do with OpenAM, IDM & DJ in my blogs soon, along with Microsoft stuff. Yes, I am not going to leave Microsoft Azure & whatever I used to do :)

Wednesday, July 22, 2015

SAML for your application is not enough!

If you have implemented authentication in your application using SAML, this will interest you. You might not have made the same mistake I did, but believe me, I thought this was out-of-the-box functionality of ADFS or any other technology which issues SAML tokens.

Let me come straight to the point. The SAML token issued by ADFS to my application was in plain, readable XML in my browser. I was astounded to find that I could read the entire SAML response in a web debugging tool, and there it was: all the information about me & my IDP. Pants!!! How the hell did that happen? It took me around half an hour to recover, and then I started digging.

First let’s just quickly understand how SAML works. I found this image online which is detailed enough to understand this protocol.

In steps:-

  • Step 1: You try to access a secured website in your browser. In this case, salesforce.com.
  • Step 2 & 3: Salesforce.com says you are not authenticated and redirects you to, let’s say, your company’s Active Directory login page. This is a 302 GET redirect which you can see in your browser.
  • Step 4 & 5: You provide your credentials to the SAML identity provider and, if they are valid, it sends you back to salesforce.com with the SAML token. This time it is a POST (the SAML POST binding), as it carries the SAML token data.


And here lay my problem. I saw the POST from steps 4 & 5 in HTTPFox, and there it was: my SAML token in plain XML in the POST data. See below:-

So, today’s new learning: by default, your SAML tokens are not encrypted between the identity provider & the service provider. You need to perform a few more steps to make sure your tokens are encrypted so that no one can see the assertions/attributes of your SAML token.

Now obviously, this only happens after a successful login, and people might argue that if someone can log in to your system then they have already sort of bypassed the security, but no. I blogged some time back about how, if you can fake claims, you can get into the application without the IDP ever knowing about it. This SAML token contains assertions/attributes not only about the logged-in user but also about your IDP and the type of trust your IDP & SP have. If a hacker gets his hands on this token, your system is open to a logged-in brute-force attack.

As I said, after I found this I started digging, and here is the official OASIS documentation on the subject. It clearly explains the types of SAML assertions, the protocol, and the possible attacks which can be tried against them. The best practice for SAML communication is to encrypt the token. Even that is not going to save you from every sort of attack, but something is better than nothing.

After I found this, fixing the issue was not really rocket science. All you have to do is upload the public key of your certificate (.cer) to your identity provider and install the certificate with the private key on the service provider, to make sure the SAML token can be decrypted when it is received there.

Now, nothing is straightforward in IT, and here too I had to do a bit of R&D to make this work. Below are the findings & best practices for doing this:-

  • First the certificate you will use for this must either be self-signed, or signed and chained directly to a public certification authority.
  • Certificate files that contain only a public key (.cer file) must be DER-encoded. Base64-encoded certificate files will result in a validation error.
And finally, this is more of an ADFS-specific issue. I managed to get the assertions protected in the token but not the rest of the message, and after a bit more digging I found that in ADFS the default is to cover the assertions only and not the full message. You can change this if you have access to your ADFS server, which unfortunately was not possible in my case. If you wish to change it, use the command below in PowerShell.

Set-ADFSRelyingParty -TargetName MyRP -SamlResponseSignature "MessageAndAssertion"

Once you do this, your SAML token will be completely encrypted and only readable by your SP & IDP. There will be a slight performance impact on your overall communication, but I think when it comes to security, this is acceptable.

Remember, most of the things which are running and have not been hacked so far are safe not because people can’t do it, but because no one is trying :) So always keep your application security at the highest level you can.

Wednesday, July 15, 2015

Application security - 5 things you don't wanna do

Let me start by saying, and I quote, “in today’s world, everything which is necessary is not secure”.

Nowadays everyone knows the significance of application security, but very few take it seriously and try to implement safeguards to avoid a breach. For the rest, it is more desirable than imperative.

Let’s face it; we have stories in the backlog, not security. We accept reduced security or a vulnerable flow just because we know properly secured communication will take more time & money, both of which are usually in short supply. But what we can do, at least, is not make things too easy for a hacker by leaving backdoors open that are mere programming errors rather than flaws in the security design.

When it comes to security, there are too many things to worry about: input validation, broken authentication, lack of authorization, security misconfiguration, insecure direct object references, improper error handling leaving sensitive information exposed, open redirects, and the list goes on. Almost all hacking techniques are designed to exploit these loopholes and gain access to your application/servers/communication.

I am not targeting any specific programming language, because the 5 things I have mentioned below apply to all of them. So let’s start.

Input validation


By the very nature of this business, I believe code should always be defensive. It helps you deal with both kinds of people: the extremely stupid and the very clever. You really don’t want anyone breaking your application just by entering something in an input field which nowhere in the world belongs there. You can’t help it; common sense is not really common anymore. But this category is also not our main problem. The problem is the very clever ones who know where validation is missing and how it can be exploited.

Common input validation hacks include SQL injection, HTML injection, cross-site scripting (XSS), buffer overflows, application content identification, phishing, etc. A hacker can inject HTML, JavaScript or unlimited data into your application’s input fields until either it breaks or it does something on the server which it was never supposed to do.

All of this can be avoided by taking a few simple measures. First, always expect the unexpected and put validations on everything. Second, implement validation on both the client & the server side. The web browser is an untrusted, uncontrolled environment, because all data coming from and going to the web browser can be modified in transit regardless of any client-side validation routines.
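
On the server side, for example, SQL injection is largely neutralized by never concatenating user input into query text. A parameterized query, sketched below with plain ADO.NET and a made-up Users table, treats the input strictly as data:

using System.Data.SqlClient;

public static class UserQueries
{
    public static int CountByName(string connectionString, string userName)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE UserName = @userName", connection))
        {
            // the value is passed as a parameter, never spliced into the SQL text,
            // so input like "'; DROP TABLE Users; --" stays harmless data
            command.Parameters.AddWithValue("@userName", userName);
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}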

Authentication & Authorization


Well, nothing much to say here. These are the bouncers or gatekeepers of your application. Any flaw here will result in unwanted guests inside your application. Just implementing a login in the application doesn’t make it secure. There are more security hacks than security solutions available nowadays. You have to implement security in a way that no one can guess and mimic it.
Common issues with authentication implementations are:-
  1. Transmitting credentials in headers over plain HTTP
  2. Passing tokens or session IDs as URL parameters
  3. No session management or timeout implemented
  4. Unsecured/open form submission without a Captcha
  5. Open redirects post authentication without validation
  6. Permissions not checked before execution

The above are just a few things, but if they are not implemented properly they can lead to attacks like denial of service (DoS/DDoS), man-in-the-middle, session hijacking, etc.

It is really easy to implement the above properly. Most programming languages and frameworks come with out-of-the-box functionality to do it correctly. All you have to do is use it and make sure you are not leaving something behind which someone can exploit.
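
The last item in the list above, for instance, is a one-line fix in ASP.NET Web API: decorate the action with an authorization attribute and the framework rejects unauthenticated or unauthorized callers before your code ever runs (the role name here is made up):

using System.Web.Http;

public class ReportsController : ApiController
{
    // only authenticated users in the Finance role ever reach this code;
    // everyone else gets a 401/403 back from the framework itself
    [Authorize(Roles = "Finance")]
    public IHttpActionResult GetMonthlyReport()
    {
        return Ok("report data");
    }
}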

Error Handling


Good quality error handling not only ensures security, but also a better application and an enhanced end-user experience. Who likes an application which gives you a yellow screen of death? (At least developers will understand what the yellow screen of death is.)

For now, let’s just see how improper error handling can lead to security flaws. See below and tell me how many things this error exposes about the application.

You might think this is just an error, not really exposing many details, but for a hacker even this much information is enough to fire much more targeted hits at your application. And remember, you need luck every time, but hackers only need it once.

So always make sure you handle all errors inside your application, and even if something is unhandled, the user should only see a generic error message and not a broken screen like the one above.
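
In ASP.NET, for example, you would typically switch the detailed error page off with <customErrors mode="On"> in web.config and catch anything that still slips through in Global.asax; a minimal sketch (the error page URL is made up) could look like this:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();    // log the real exception somewhere useful for yourself
        Server.ClearError();                     // ...but never let it reach the user
        Response.Redirect("~/Error/Unknown");    // generic "something went wrong" page
    }
}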

Security Misconfiguration


This is another area where you don’t even need code to screw things up. Application configuration is a critical aspect of security, and deploying something to production with your local environment’s configuration is as good as giving your application to a hacker on a silver plate with a ribbon on it.
Below are some examples which can lead to leakage of your application’s information:-
  1. Deploying application with default username/passwords of third party components
  2. Deploying application in debug mode
  3. Keeping directory listing enabled
  4. Deploying application with security disabled for certain areas of application. 
  5.  Running outdated software or operating system
The best solution for overcoming the above issues is to automate the entire build and deployment process. Most deployment errors happen because the process is manual. The saying “to err is human” fits perfectly here.

An automated process will keep the environments consistent and reduce the number of human/manual errors.

Understand the technology & understand the business


And finally, understand before you implement. I have seen people (the majority in big organizations) who only care about the module they are developing, without understanding the entire end-to-end flow and the organization’s rules. Developers are often too removed from the business, especially if they do not work for the company whose website they are creating. A developer will not be able to accurately model threats unless he or she is keenly aware of what the business objectives are and which critical information assets have to be protected by the application.

The same goes for technology. Web applications are changing rapidly, and the tools used to build them are changing even more quickly. Everybody involved in the web development process has to live up to the challenge of understanding the security aspects of the particular frameworks and development environments in use. This process is made harder if organizations try to chase the latest fad in web technology just to keep up with the industry.

Technology owners are trying hard to keep their products up to date against the latest threats, and the various programming languages are updating their modules to fight these threats as well. If you understand the threat and the technology, then all you have to do is implement an out-of-the-box solution to deal with it. Believe me, it requires much less time than it sounds. All you need to know is what you are up against and what you have to fight it with.

Summary


Hopefully I have given you some idea of how implementing even small things in your application can save you from major hacks. Also, just to set expectations right, we haven’t even scratched the surface of application security here. If you really wish to build a secure application, then I am afraid there are no shortcuts. You will always have to be on top of your game, staying up to date with the various security threats and their mitigations.

Sunday, June 14, 2015

SSO -- All we need is "Claims"

Bypass application security with right claims


Nowadays security is all about tokens and claims. If you have a token with the right claims, you can get into any application or service without anyone ever knowing about it. Sounds scary, doesn't it? All the security you have ever implemented in the application can be compromised by anyone if they get their hands on the token your application receives from its IDP.

Now that I have scared you enough, let me just explain what I did.

The requirement was to redirect users from a mobile app to one of our secured websites, which uses ForgeRock OpenAM for authentication. And since the users are already logged in to the mobile app, we never wanted those users to be redirected to OpenAM again for login when they hit our secured website. In short, we wanted the website to consider users coming from our mobile app as logged in, without going to OpenAM even once.

Now, there are two ways you can achieve the above. One is to override the authentication manager of the website to accept users when they come from the mobile app. You can just give mobile users an identifier which tells the website that this is a mobile app user and let them in, but here you are compromising the security of your application.

The second is to mimic the security which OpenAM and the website use to validate their users. This is a comparatively more elegant approach: first, you are using out-of-the-box functionality of both OpenAM & the website, and second, you are not compromising security by introducing a loophole.

In order to implement the second, you need to know how the authentication works between any identity provider & consumer. Below is a picture I found online to explain the process.

I am not going to explain federated security here, but at a high level, when any identity provider (in our case OpenAM) authenticates the user, it issues them a security token which contains claims. These claims are nothing but some information about the user, issued by the IDP post successful authentication, plus some information about the service you are trying to access. This part is very important and I will come back to it later.

So, if you can generate the same claims in a token and pass that token to the application, it really doesn’t care whether you got that token from its IDP or you made it yourself. This is what we are going to do, and you can use the .Net code below to achieve it with any identity provider. Which claims your application expects in the token varies from application to application, but the overall principle is the same.

First, let's create some claims:-


// Namespaces assumed here (a .NET 4.5 / WIF setup): System.Collections.Generic, System.Security.Claims,
// System.Threading, System.IdentityModel.Tokens (SessionSecurityToken),
// System.IdentityModel.Services (FederatedAuthentication) and Microsoft.AspNet.Identity (DefaultAuthenticationTypes).

//Create claims
var claims = new List<Claim>();
claims.Add(new Claim(ClaimTypes.NameIdentifier, "NAME-IDENTIFIER", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/identity/claims/tenantid", "TENANT-ID", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/identity/claims/objectidentifier", "OBJECT-IDENTIFIER", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", "PiyushTest@ABC.com", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname", "Gupta", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname", "Piyush", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/identity/claims/displayname", "Piyush Gupta", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/identity/claims/identityprovider", "IDENTITY-PROVIDER-URL", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider", "IDENTITY-PROVIDER-URL", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", "AUTHENTICATION-METHOD", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));
claims.Add(new Claim("http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant", "2015-06-09T07:10:32.080Z", "http://www.w3.org/2001/XMLSchema#string", "https://issuerUrl/", "https://issuerUrl/"));

Now create an identity and add these claims to it:-

//create claims identity using above claims
var identity = new ClaimsIdentity(claims, DefaultAuthenticationTypes.ApplicationCookie, ClaimTypes.Name, ClaimTypes.Role);
        

Create a ClaimsPrincipal, add the identity created above to it, and set it on the current thread, because this is where your application will look for it.

//creating principal
ClaimsPrincipal principal = new ClaimsPrincipal(identity);
Thread.CurrentPrincipal = principal;

And finally, use the SessionSecurityToken class to write a cookie with all this data.

SessionSecurityToken sessionSecurityToken = new SessionSecurityToken(principal, TimeSpan.FromHours(8));
sessionSecurityToken.IsReferenceMode = true;
FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(sessionSecurityToken);

return Redirect("To-My-Secured-Website");

Once you have done the above and set the right claims in the token, your application will silently accept it and let the user in without going to the identity provider for authentication.

Now, after you implement this, don't think your application's security is useless :) It's not.

Somewhere above I said I would come back to this. Your application needs a token and claims in order to let users in. Now, these claims and their values are everything, and not just anyone can get their hands on them; otherwise I would have just compromised my application by showing you the above :)

As I mentioned earlier, half of these claims contain information about the user which only the identity provider can give once you provide the correct username/password. The rest of the information is about your application, which either you have access to, or someone else does if they hack into your server. Now, both of these are big IFs, and if someone has the user's correct username/password or has hacked into your servers, then creating false claims would be the last thing they would plan to do with it :)

Hope you enjoyed this hack!