Fraud & Identity Management


There seem to be two viewpoints in the market today about Knowledge Based Authentication (KBA): one positive, one negative.  Depending on the corner you choose, you probably view it either as a tool to help reduce identity theft and minimize fraud losses, or as a deficiency in the management of risk and the root of all evil.  The opinions on both sides are strong, and biases "for" and "against" run deep.

One of the biggest challenges in discussing Knowledge Based Authentication as part of an organization's identity theft prevention program is the perpetual confusion between dynamic out-of-wallet questions and static "secret" questions.  At this point, most people in the industry agree that static secret questions offer little consumer protection.  Answers are easily guessed or easily researched, and if the questions are preference based (like "What is your favorite book?") there is a good chance the consumer will fail the authentication session because they forgot the answers or the answers changed over time.

Dynamic Knowledge Based Authentication, on the other hand, presents questions that were not selected by the consumer.  Questions are generated from information known about the consumer – things the true consumer would know and a fraudster most likely wouldn't.  The questions posed during Knowledge Based Authentication sessions aren't designed to "trick" anyone but a fraudster, though a best-in-class product should offer a number of features and options.  These may allow for flexible configuration of the product and deployment at multiple points of the consumer life cycle without impacting the consumer experience.

The two are as different as night and day.  Do those who consider "secret questions" to be Knowledge Based Authentication consider the password portion of the user name and password process to be KBA as well?
If you want to hold to strict logic and definition, one could argue that a password meets the definition of Knowledge Based Authentication, but common sense and practical use cause us to differentiate it – which is exactly what we should do with secret questions: differentiate them from true KBA.  KBA can provide strong authentication or be part of a multifactor authentication environment without a negative impact on the consumer experience.  So, for the record, when we say KBA we mean dynamic, out-of-wallet questions: the kind that are generated "on the fly" and delivered to a consumer via "pop quiz" in a real-time environment.  And we think this kind of KBA does work.  As part of a risk management strategy, KBA has a place within the authentication framework as a component of risk-based authentication… and risk-based authentication is what it is really all about.

Published: March 5, 2010 by Monica Pearson

When a client is selecting questions to use, Knowledge Based Authentication is always about the underlying data – or at least it should be.  The strength of Knowledge Based Authentication questions will depend, in large part, on the strength of the data and how reliable it is.  After all, if you are going to depend on Knowledge Based Authentication for part of your risk management and decisioning strategy, the data had better be accurate.

I've heard it said within the industry that clients only want a system that works and have no interest in where the data originates.  Personally, I think that opinion is wrong.  I think it is closer to the truth to say there are those who would prefer that clients didn't know where the data that supports their fraud models and Knowledge Based Authentication questions originates; and I think those people "encourage" clients not to ask.  It isn't a secret that many within the industry use public record data as the primary source for their Knowledge Based Authentication products, but what's important to consider is just how accessible that public record information is.  Think about that for a minute.  If a vendor can build questions on public record data, can a fraudster find the answers in public record data via an online search?

Using Knowledge Based Authentication for fraud account management is a delicate balance between customer experience/relationship management and risk management.  Because it is so important, we believe in research – reading the research of well-known and respected groups like Pew, Tower, and Javelin, and doing our own.  Based on our research, I know consumers prefer questions that are appropriate and relevant to their activity.  In other words, if the consumer is engaged in a credit-granting activity, it may be less appropriate to ask questions centered on personal associations and relatives.
Questions should be difficult for the fraudster, but not difficult for – or perceived as inappropriate or intrusive by – the true consumer.  Additionally, I think questions should be applicable to many clients and many consumers.  The question set should use a mix of data sources: public, proprietary, non-credit, credit (if permissible purpose exists) and innovative.  Is it appropriate to have in-depth data discussions with clients about each data source?  Debatable.  Is it appropriate to ensure that each client understands the questions they ask as part of Knowledge Based Authentication and where the data that supports those questions originates?  Absolutely.

Published: March 2, 2010 by Monica Pearson

My last entry covered the benefits of consortium databases and industry collaboration in general as a proven and technologically feasible method for combating fraud across industries.  They help minimize fraud losses.  So – with some notable exceptions – why are so few industries and companies using fraud consortiums and known fraud databases?

In my experience, the reasons typically boil down to two things: reluctance to share data and perception of ROI.  I say "perception of ROI" because I firmly believe the ROI is there – in fact, it grows with the number of consortium participants.

First, reluctance to share data seems to stem from a few areas.  One is concern for how that data will be used by other consortium members.  This is usually addressed by compelling reciprocation of data contribution by all members (the give-to-get model) as well as strict guidelines for acceptable use.  In today's climate of hypersensitivity, another concern – rightly so – is the stewardship of Personally Identifiable Information (PII).  Given the potentially damaging effects of data breaches on consumers and businesses, smart companies are extremely cautious and careful when making decisions about safeguarding consumer information.  So how does a data consortium deal with this?  Firewalls, access control lists, encryption, and other modern security technologies provide the defenses necessary to protect information contributed to the consortium.

So, let's assume we've overcome the obstacles to sharing one's data.  The other big hurdle to participation that I come across regularly is the old "what's in it for me" question.  Contributors want to be sure that they get out of it what they put into it.  Nobody wants to be the only one, or the largest one, contributing records.  In fact, this issue extends to intracompany consortiums as well.
No line of business wants to be the sole sponsor only to have other business units come late to the party and reap all the benefits on their dime.  Whether within companies or across an industry, it's obvious that mutual funding, support, equitable operating rules, and clear communication of benefits – to contributors both big and small – are necessary for fraud consortiums to succeed.  Getting there is going to take a lot more interest and participation from industry leaders.  What would this look like?  I think we'd see a large shift in companies' fraud columns: from "Discovered" to "Attempted".  This shift would save time and money that could be passed back to legitimate customers.  More participation would also enable consortiums to stay on top of changing technology and evolving consumer communication styles – email, text, mobile banking, and voice biometrics, to name a few.

Published: February 8, 2010 by Matt Ehrlich

There was a recent discussion among members of the Anti Fraud experts group on LinkedIn regarding collaboration among financial institutions to combat fraud.  Most posters agreed on the benefits of such collaboration but were cynical when it came to anything of substance, such as a shared data network, getting off the ground.  I happen to agree with some of the opinions on the primary challenges faced in getting cross-industry (or even single-industry!) cooperation to prevent both consumer and commercial fraud, those being: 1) sharing data and 2) return on investment.

Despite the challenges, there are some fraud prevention and "negative" file consortium databases available in the market as fraud prevention tools.  They're often used in conjunction with authentication products in an overall risk-based authentication / fraud deterrence strategy.  Some are focused on the Demand Deposit Account (DDA) market, such as Fidelity's DebitBureau, while others, like Experian's own National Fraud Database, address a variety of markets.  Early Warning Services has a database of both "account abuse" – aka DDA financial mismanagement – and fraud records.  Still others, like Ethoca and the UK's 192.com, seem focused on merchant data and online retailers.

Regardless of the consortium, they share some common traits.  Most:

- fall under Fair Credit Reporting Act regulation
- are used in the acquisition phase as part of the new account decision
- require contribution of data to access the shared data network

Given the seemingly general reluctance to participate in fraud consortiums, as evidenced by the group described above, how do we assess the value of these consortium databases?  Well, for one, most U.S. banks and credit unions participate in and contribute customer behavior data to a consortium.
Safe to say, then, that the banking industry has recognized the value of collaboration and sharing data – if not exclusively to minimize fraud losses, then at least to manage potential risk at acquisition.  I'm speaking here of the DDA financial mismanagement data used under the guiding principle of "past performance predicts future results".  Consortium data that includes confirmed fraud records makes the value of collaboration even clearer: a match to one of these records compels further investigation and a more cautious review of the transaction or decision.  With this much to gain, why aren't more companies and industries rushing to join or form a consortium?  In my next post, I'll explore the common objections to joining consortiums and what the future may look like.

Published: February 5, 2010 by Matt Ehrlich

By: Ken Pruett

I thought it might be helpful to give an example of a recent performance monitoring engagement to show just how the performance monitoring process can help.  The organization to which I'm referring has been using Knowledge Based Authentication for several years.  They issue retail credit cards through their online channel, an area that usually experiences a higher rate of fraud.  The Knowledge Based Authentication product is used prior to credit being issued.

The performance monitoring process involved the organization providing us with a sample of approximately 120,000 records, some good and some bad.  Analysis showed that they had a 25 percent referral rate – but they were concerned about the number of frauds they were catching.  They felt that too many frauds were getting through; they believed the fraud process was probably too lenient.

Based on their input, we started a detailed analytic exercise with the intention, of course, of minimizing fraud losses.  Our study found that, by changing several criteria items in the set-up, the organization was able to bring the tool more in line with expectations.  By lowering the pass rate by only 9 percent, they increased their fraud find rate by 27 percent – much more in line with their goals for this process.  In this situation, a score was being used, in combination with the organization's customers' ability to answer questions, to determine the overall accept-or-refer decision.  The change to the current set-up involved requiring customers to answer at least one more question in combination with certain scores.  Although the change was minor in nature, it yielded fairly significant results.

Our next step in the engagement involved looking at the questions.  Analysis showed that some questions should be eliminated due to poor performance.  They were not really separating fraud, so removing them would be beneficial to the overall process.
We also determined that some questions performed very well.  We recommended that these questions carry a higher weight in the overall decision process.  For example, a customer might be required to answer only two of the higher-weighted questions correctly versus three of the lesser-performing questions.  The key here is to keep pass rates up while still preventing fraud.  Striking this delicate balance is the key objective.  As you can see from this example, this is an ongoing process, but the value in that process is definitely worth the time and effort.
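A weighted decision rule of the kind described above can be sketched as follows.  The question names, weights, and threshold here are hypothetical, purely for illustration – they are not taken from the engagement:

```python
# Hypothetical sketch of a weighted KBA decision rule: higher-performing
# questions carry more points, so fewer correct answers are needed to pass.
QUESTION_WEIGHTS = {
    "prior_street_name": 3,    # strong fraud separation -> higher weight
    "vehicle_ever_owned": 3,
    "county_of_residence": 2,
    "phone_number_range": 1,   # weak separation -> lower weight
}

PASS_THRESHOLD = 6  # e.g. two 3-point questions, or 3 + 2 + 1

def score_session(answers):
    """answers: dict mapping question name -> bool (answered correctly).
    Returns (points earned, pass/fail decision)."""
    earned = sum(QUESTION_WEIGHTS[q] for q, ok in answers.items() if ok)
    return earned, earned >= PASS_THRESHOLD

# Two correct high-weight answers are enough to pass...
print(score_session({"prior_street_name": True, "vehicle_ever_owned": True,
                     "county_of_residence": False, "phone_number_range": False}))
# ...while correct answers on only the weaker questions are not.
print(score_session({"prior_street_name": False, "vehicle_ever_owned": False,
                     "county_of_residence": True, "phone_number_range": True}))
```

The point of the points-based approach is exactly what the post describes: the decision is no longer "how many questions were answered correctly" but "how much evidence did the correct answers carry."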

Published: January 29, 2010 by Guest Contributor

Meat and potatoes

Data are the meat and potatoes of fraud detection.  You can have the brightest and most capable statistical modeling team in the world, but if they have crappy data, they will build crappy models.  Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built.

How do you measure data performance?

If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics.  I will want to know basic metrics, like how many records in my database have name and address populated.  And how many addresses do I typically have for each consumer?  Just one, or many?  I will want to know how often, on average, we are able to match a name with an address.  It doesn't do much good to tell you your name and address don't match when, in reality, they do.

With any fraud product, I will definitely want to know how often we can locate the consumer in the first place.  If you send me a name, address, and Social Security number, what is the likelihood that I will be able to find that particular consumer in my database?  This process of finding a consumer based on certain input data (such as name and address) is called pinning.  If you have incomplete or stale data, your pin rate will undoubtedly suffer.  And my fraud tool isn't much good if I don't recognize many of the people you are sending me.

Data need to be fresh.  Old and out-of-date information will hurt your strategies, often punishing good consumers.  Let's say I moved one year ago, but your address data are two years old – what are the chances that you are going to be able to match my name and address?  Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the more sexy aspects of fraud detection (such as predictive scoring, out-of-wallet questions, red flag rules, etc.) while ignoring the foundation upon which all of these strategies are built.
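As a rough illustration, the coverage and pin-rate metrics described above are simple ratios over your matching results.  The record layout and exact-match logic below are made up for the sketch; a real system would use fuzzy matching and far richer keys:

```python
# Hypothetical sketch: measuring coverage and pin rate for a fraud database.
# "Pinning" = locating the consumer from the input data (name + address here).

records = [
    {"name": "JANE DOE", "address": "12 OAK ST"},
    {"name": "JOHN ROE", "address": None},           # address not populated
    {"name": "MARY MAJOR", "address": "9 ELM AVE"},
]

inquiries = [
    {"name": "JANE DOE", "address": "12 OAK ST"},    # pins
    {"name": "JANE DOE", "address": "99 PINE RD"},   # stale address -> no pin
    {"name": "MARY MAJOR", "address": "9 ELM AVE"},  # pins
]

def coverage(records):
    """Share of records with both name and address populated."""
    complete = sum(1 for r in records if r["name"] and r["address"])
    return complete / len(records)

def pin_rate(inquiries, records):
    """Share of inquiries matched to a record on exact name + address."""
    keys = {(r["name"], r["address"]) for r in records}
    pinned = sum(1 for q in inquiries if (q["name"], q["address"]) in keys)
    return pinned / len(inquiries)

print(f"coverage: {coverage(records):.0%}")          # 2 of 3 records complete
print(f"pin rate: {pin_rate(inquiries, records):.0%}")  # 2 of 3 inquiries pin
```

Note how the stale address alone drops the pin rate: exactly the "moved one year ago, data two years old" failure mode described above.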

Published: January 20, 2010 by Andrew Gulledge

By: Ken Pruett

The use of Knowledge Based Authentication (KBA), or out-of-wallet questions, continues to grow.  For many companies, this solution is one of the primary means of fraud prevention.  Selecting the proper tool often involves a fairly significant due diligence process to evaluate various offerings before choosing the right partner and solution.  Companies just want to make sure they make the right choice.

I am often surprised that a large percentage of customers just turn these tools on and never evaluate or even validate ongoing performance.  Performance monitoring is a way to make sure you are getting the most out of the product you are using for fraud prevention.  This exercise is designed to take an analytical look at what you are doing today with Knowledge Based Authentication.

There are a variety of benefits that most customers experience after undergoing this fraud analytics exercise.  The first is simply validating that the tool is working properly.  Some questions to ponder: Are enough frauds being identified?  Is the manual review rate in line with what was expected?  In almost every one of these engagements I have worked on, there were areas that were not in line with what the customer was hoping to achieve.  Many had no idea that they were not getting the expected results.

Taking this one step further, changes can also be made to improve upon what is already in place.  For example, you can evaluate how well each question is performing.  The analysis can show you which questions are doing the best job of predicting fraud.  Using better-performing questions can allow you to find more fraud while referring fewer applications for manual review.  This is a great way to optimize how you use the tool.  In most organizations there is increased pressure to make sure that every dollar spent is bringing value to the organization.
Performance monitoring is a great way to show the value your KBA tool brings to the organization.  The exercise can also be used to show how you are proactively managing your fraud prevention process: you accomplish this by showing how well you are optimizing your use of the tool today while addressing emerging fraud trends.  The key message is to continuously measure the performance of the KBA tool you are using.  An exercise like performance monitoring can provide you with great insight on a quarterly basis.  This will allow you to get the most out of your product and help you keep up with a variety of emerging fraud trends.  Doing nothing is really not an option in today's ever-changing environment.
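A minimal sketch of the kind of metrics such a quarterly monitoring exercise tracks – referral rate and fraud find rate – might look like this.  The tagged outcomes below are invented for illustration:

```python
# Hypothetical sketch: quarterly KBA performance metrics from tagged outcomes.
# Each session records whether the tool referred it for manual review and
# whether it was later confirmed as fraud.

sessions = [
    {"referred": True,  "fraud": True},   # fraud caught
    {"referred": True,  "fraud": False},  # false positive
    {"referred": False, "fraud": False},  # good consumer passed
    {"referred": False, "fraud": True},   # fraud that slipped through
    {"referred": False, "fraud": False},
]

def referral_rate(sessions):
    """Share of all sessions sent to manual review."""
    return sum(s["referred"] for s in sessions) / len(sessions)

def fraud_find_rate(sessions):
    """Share of known frauds that the tool referred for review."""
    frauds = [s for s in sessions if s["fraud"]]
    return sum(s["referred"] for s in frauds) / len(frauds)

print(f"referral rate:   {referral_rate(sessions):.0%}")
print(f"fraud find rate: {fraud_find_rate(sessions):.0%}")
```

Tracking these two numbers together is what makes the trade-off in the earlier engagement visible: lowering the pass rate raises the referral rate, and the question is how much fraud find rate you buy with it.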

Published: January 18, 2010 by Guest Contributor

Conducting a validation on historical data is a good way to evaluate fraud models; however, fraud best practices dictate that a proper validation uses properly defined fraud tags.  Before you can determine whether a fraud model or fraud analytics tool would have helped minimize fraud losses, you need to know what you are looking for in this category.  Many organizations have difficulty differentiating credit losses from fraud losses; usually, fraud losses end up lumped in with credit losses.  When this happens, the analysis either has too few "known frauds" to create a business case for change, or it includes a large target population of credit losses that produces poor results.  By planning carefully, you can avoid this pitfall and ensure that your validation gives you the best chance to improve your business and minimize fraud losses.

As a fraud best practice for validations, consider using a target population that errs on the side of including credit losses; however, be sure to include additional variables in your sample that will allow you and your fraud analytics provider to apply various segmentations to the results.  Suggested elements to include in your sample are: delinquency status, first delinquency date, date of last valid payment, date of last bad payment, and an indicator of whether the account was reviewed for fraud prior to booking.  Starting with a larger population, and giving yourself the flexibility to narrow the target later, will help you see the full value of the solutions you evaluate and reduce the likelihood of having to redo the analysis.
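The suggested sample layout might look something like this – the field names and the segmentation rule are illustrative assumptions, not a required schema:

```python
# Hypothetical sketch: a validation sample row carrying the extra variables
# needed to segment credit losses from likely fraud losses after the fact.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ValidationRecord:
    account_id: str
    loss_amount: float
    delinquency_status: str                    # e.g. "30", "60", "90", "CO"
    first_delinquency_date: Optional[date]
    last_valid_payment_date: Optional[date]
    last_bad_payment_date: Optional[date]
    reviewed_for_fraud_at_booking: bool

def likely_fraud(rec: ValidationRecord) -> bool:
    """One possible segmentation: a charged-off account with no valid
    payment ever made looks more like fraud than credit loss."""
    return (rec.last_valid_payment_date is None
            and rec.delinquency_status == "CO")

rec = ValidationRecord("A-1001", 2400.0, "CO", date(2009, 3, 1), None,
                       date(2009, 2, 15), reviewed_for_fraud_at_booking=False)
print(likely_fraud(rec))  # True: charged off with no valid payment on file
```

Carrying these fields in the sample is what lets you start broad (credit losses included) and narrow the fraud target later without pulling a new extract.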

Published: January 13, 2010 by Chris Ryan

In a previous blog, we shared ideas for expanding the "gain" to create a successful ROI for adopting new fraud best practices.  In this post, we'll look more closely at the "cost" side of the ROI equation.

The cost of the investment – The costs of fraud analytics and tools that support fraud best practices go beyond the fees charged by the solution provider.  While the marketplace is aware of these costs, they often aren't considered by the solution providers.  Achieving consensus on an ROI to move forward with new technology requires both parties to account for these costs.  A more robust ROI should include these areas:

• Labor costs – If a tool increases fraud referral rates, the cost of working those referrals must be taken into account.
• Integration costs – Many organizations have strict requirements for recovering integration costs.  This can place an additional burden on a successful ROI.
• Contractual obligations – As customers look to reduce the cost of other tools, they must be mindful of any obligations to use those tools.
• Opportunity costs – Organizations do need to account for the potential impact of their fraud best practices on good customers.  Barring a true champion/challenger evaluation, a good way to do this is to remain as neutral as possible with respect to the total number of fraud alerts generated by the new fraud tools compared to the legacy process.

As you can see, the challenge of creating a compelling ROI can be much more complicated than the basic equation suggests.  It is critical in many industries to begin exploring ways to augment the ROI equation.  This will ensure that our industries evolve and thrive without becoming complacent or unable to stay on top of dynamic fraud trends.

Published: January 11, 2010 by Chris Ryan

By definition, "Return on Investment" is simple:

    ROI = (The gain from an investment – The cost of the investment) / The cost of the investment

With such a simple definition, why do companies that develop fraud analytics and their customers have difficulty agreeing to move forward with new fraud models and tools?  I believe the answer lies in the definition of the factors that make up the ROI equation.

"The gain from an investment" – When it comes to fraud, most vendors and customers want to focus on minimizing fraud losses.  But what happens when fraud losses are not large enough to drive change?  To adopt new technology, it's necessary for the industry to expand its view of the "gain."  One way to do that is to identify other types of savings and opportunities that aren't currently measured as fraud losses.  These include:

Cost of other tools – Data returned by fraud tools can be used to resolve Red Flag compliance discrepancies and help fraud analysts manage high-risk accounts.  By making better use of this information, downstream costs can be avoided.

Other types of "bad" – Organizations are beginning to look at the similarities among fraud and credit losses.  Rather than identifying a fraud trend and searching for a tool to address it, some industry leaders are taking a different approach: let the fraud tool identify the high-risk accounts, and then see what types of behavior exist in that population.  This approach helps organizations create the business case for constant improvement and also helps them validate the way in which they currently categorize losses.

Cross-sell opportunities – Focus on the "good" populations.  False positives aren't just filtered out of the fraud review work flow; they are routed into other work flows where relationships can be expanded.
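The basic equation, with the expanded view of the "gain" folded in, can be sketched as follows.  All dollar figures are hypothetical:

```python
# Hypothetical sketch: ROI with an expanded view of the "gain" side.
def roi(gain, cost):
    """(gain - cost) / cost, per the basic ROI definition."""
    return (gain - cost) / cost

# Fraud losses avoided alone may not justify the tool...
fraud_losses_avoided = 90_000.0
tool_cost = 100_000.0
print(f"{roi(fraud_losses_avoided, tool_cost):+.0%}")

# ...but folding in other savings and opportunities the tool enables
# (e.g. retired legacy tools, cross-sell revenue) can change the picture.
expanded_gain = fraud_losses_avoided + 30_000.0 + 25_000.0
print(f"{roi(expanded_gain, tool_cost):+.0%}")
```

The same arithmetic cuts the other way on the cost side, of course: adding labor, integration, and opportunity costs to the denominator shrinks an ROI that looked healthy on fees alone.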

Published: January 4, 2010 by Chris Ryan

By: Heather Grover

In my previous entry, I covered how fraud prevention affects the operational side of new DDA account opening.  To give a complete picture, we need to consider fraud best practices and their impact on the customer experience.

As mentioned earlier, the branch continues to be a highly utilized channel and is the place for "customized service."  In addition, for retail banks the branch continues to be the consumer's first point of contact, so fraud detection is paramount in deciding whether we should initiate a relationship with the consumer.  Traditional thinking has been that DDA accounts are secured by deposits, so little risk management policy is applied.  The reality is that the DDA account can be a fraud portal into the organization's many products.

Bank consolidations and lower application volumes are driving increased competition at the branch – increased demand exists to cross-sell consumers at the point of new account opening.  As a result, banks are moving many fraud checks to the front end of the process: know-your-customer and Red Flag guideline checks are done sooner, in a consolidated and streamlined fashion.  This minimizes fraud losses and meets compliance in a single step, so that new account holders are processed through the system as quickly as possible.

Another recent trend is streamlining a two-day batch fraud check process to provide account holders with an immediate and final decision.  The casualty of a longer process could be a consumer who walks out of your branch with a checkbook in hand – only to be contacted the next day and told that his or her account has been shut down.  By addressing this process, not only will the customer experience be improved, with increased retention, but operational costs will also be reduced.

Finally, relying on documentary evidence for ID verification can be viewed by some consumers as onerous and lengthy.
Use of knowledge based authentication can provide more robust authentication while giving assurance of the consumer's identity.  The key is to use a solution that can authenticate "thin file" consumers opening DDA accounts.  This means your out-of-wallet questions need to rely on multiple data sources – not just credit.  Interactive questions can give your account holders peace of mind that you are doing everything possible to protect their identity – which builds the customer relationship… and your brand.

Published: January 4, 2010 by Guest Contributor

By: Heather Grover

In past client and industry talks, I've discussed the increasing importance of retail branches to the growth strategy of the bank.  Branches are the most utilized channel of the bank, and they tend to be the primary tool for relationship expansion.  Given the face-to-face nature, the branch historically has been viewed as a relatively low-risk channel needing little (if any) identity verification – there are fewer uses of robust risk-based authentication or out-of-wallet questions.  However, a now well-established fraud best practice is doing proper identity verification and fraud prevention at the point of DDA account opening.

In the current environment of declining credit application volumes and approvals across the enterprise, there is an increased focus on organic growth through deposits.  Doing proper vetting during DDA account opening helps bring your retail process closer in line with the rest of your organization's identity theft prevention program.  It also provides assurance and confidence that the customer can now be cross-sold and up-sold to other products.

A key industry challenge is that many of the current tools used in DDA are less mature than those in other areas of the organization.  We see few clients in retail that are using advanced fraud analytics or fraud models to minimize fraud – and even fewer using them to automate manual processes – even though more than 90 percent of DDA accounts are opened manually.

A relatively simple way to improve your branch operations is to streamline your existing ID verification and fraud prevention tool set:

1. Are you using separate tools to verify identity and minimize fraud?  Many providers offer solutions that can do both, which can help minimize the number of steps required to process a new account.
2. Is the solution real-time?
The more you can provide your new account holders with an immediate and final decision, the less time and effort you'll spend finalizing the decision after they leave the branch.

3. Does the solution provide detailed data for manual review?  This can help save valuable analyst time and provider costs by limiting the need to do additional searches.

In my next post, we'll discuss how fraud prevention in DDA impacts the customer experience.

Published: December 30, 2009 by Guest Contributor

The definition of account management authentication is: keep your customers happy, but don't lose sight of fraud risks and the effective tools to combat those risks.  In my previous posting, I discussed some unique fraud risks facing institutions during the account management phase of their customer lifecycles.  As a follow-up, I want to review a couple of effective tools that allow you to efficiently minimize fraud losses post-application:

Knowledge Based Authentication (KBA) – This process involves the use of challenge/response questions beyond "secret" or "traditional" internally derived questions (such as mother's maiden name or last transaction amount).  This tool allows for measurably effective use of questions based on more broad-reaching data (credit and noncredit) and consistent delivery of those questions without subjective question creation and grading by call center agents.  KBA questions sourced from information not easily accessible to call center agents or fraudsters provide an additional layer of security that is more resistant to social engineering.  From a process efficiency standpoint, automated KBA can also shorten online sessions for consumers and reduce call times, as agents spend less time self-selecting questions, self-grading responses, and subjectively determining next steps.  Delivering KBA questions via consumer-facing online platforms or interactive voice response (IVR) systems can further reduce operational costs, since the entire KBA process can be accommodated without call center agent involvement.

Negative file and fraud database – Performing checks against known fraud and abuse records affords institutions an opportunity to check, in batch or real time, elements such as address, phone, and SSN for prior fraudulent use or victimization.
These checks are a critical element in supplementing traditional consumer authentication processes, particularly in account management procedures where consumer and/or account information may have been compromised.  Transaction requests such as address or phone changes to an account are particularly low-hanging fruit as far as running negative file checks is concerned.
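A negative-file check of the kind described can be sketched as follows.  The record layout and the entries in the file are made up for illustration; a production system would match against a large shared database, not an in-memory set:

```python
# Hypothetical sketch: screening account-change requests against a negative
# file of elements (address, phone, SSN) with prior fraudulent use.

negative_file = {
    ("address", "55 SHADY LN"),
    ("phone", "555-0100"),
    ("ssn", "000-12-3456"),
}

def screen_request(request):
    """Return the elements of a change request that hit the negative file."""
    return [field for field in ("address", "phone", "ssn")
            if field in request and (field, request[field]) in negative_file]

# An address-change request reusing a known-bad phone number gets flagged
# for further investigation rather than being processed automatically.
hits = screen_request({"address": "12 OAK ST", "phone": "555-0100"})
print(hits)  # ['phone']
```

The same screen works in batch (over the night's change requests) or in real time at the moment the change is requested, as the post notes.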

Published: December 28, 2009 by Keir Breitenfeld

By: Andrew Gulledge

Intelligent use of features

Question ordering: You want some degree of randomization in the questions included in each session.  If a fraudster (posing as you) comes through Knowledge Based Authentication two or three times, wouldn't you want them to answer new questions each time?  At the same time, you want to use the better-performing questions more often.  One way to achieve both is to group the questions into categories and use a fixed category ordering (with the better-performing categories higher up in the batting line-up); then, within each category, the question selection is randomized.  This way, you generally use the better questions more, but at the same time make it difficult to come through Knowledge Based Authentication twice and get the same questions presented back to you.  (You can also force all new questions in subsequent sessions with a question exclusion strategy, but this can be restrictive and make the "failure to generate questions" rate spike.)

Question weighting: Since we know some questions outperform others, both in terms of percentage correct and in terms of fraud separation, it is generally a good idea to weight the questions with points based on these performance metrics.  Weighting can help squeeze some additional fraud detection out of your Knowledge Based Authentication tool.  It also provides considerable flexibility in your decisioning, since it is no longer just "how many questions were answered correctly" but "what percentage of points were obtained."

Usage limits: You should only allow a consumer to come through the Knowledge Based Authentication process a certain number of times before getting an auto-fail decision.  This can take the form of x uses allowed within y hours/days/etc.

Time-out limit: You should not allow fraudsters to research the questions in the middle of a Knowledge Based Authentication session.
The real consumer should know the answers off the top of their heads. In a web environment, five minutes should be plenty of time to answer three to five questions. A call center environment should allow for more time since some people can be a bit chatty on the phone.  
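The features above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual implementation: the categories, questions, point weights, and limits below are all invented for the example, and a production system would draw them from measured question performance.

```python
import random

# Illustrative sketch of the KBA session features described above:
# fixed category ordering with randomized question selection within each
# category, per-question point weighting, and a usage limit per consumer.
# All names, questions, and thresholds here are hypothetical.

# Categories in fixed order, best-performing first; each question carries
# a point weight reflecting its (assumed) fraud-separation performance.
QUESTION_POOL = [
    ("prior_address", [("Which street did you live on in 2005?", 3),
                       ("Which city did you live in before your current one?", 2)]),
    ("loan_history",  [("Who holds your auto loan?", 2),
                       ("In what year did you open your mortgage?", 2)]),
    ("vehicle",       [("What color is your registered vehicle?", 1)]),
]

MAX_SESSIONS = 3            # usage limit: auto-fail after this many attempts...
USAGE_WINDOW_SECS = 86400   # ...within a 24-hour window
SESSION_TIMEOUT_SECS = 300  # five minutes to answer in a web environment

_usage_log: dict[str, list[float]] = {}

def allowed_to_start(consumer_id: str, now: float) -> bool:
    """Enforce the usage limit: at most MAX_SESSIONS within the window."""
    recent = [t for t in _usage_log.get(consumer_id, [])
              if now - t < USAGE_WINDOW_SECS]
    _usage_log[consumer_id] = recent
    if len(recent) >= MAX_SESSIONS:
        return False          # auto-fail decision
    recent.append(now)
    return True

def pick_questions(rng: random.Random) -> list[tuple[str, int]]:
    """Fixed category order, randomized question choice within each category."""
    return [rng.choice(questions) for _, questions in QUESTION_POOL]

def score(answers_correct: list[bool], picked: list[tuple[str, int]]) -> float:
    """Decision on percentage of points earned, not raw count correct."""
    earned = sum(w for ok, (_, w) in zip(answers_correct, picked) if ok)
    total = sum(w for _, w in picked)
    return earned / total
```

Note how the weighted score lets two consumers who each miss one question receive different outcomes depending on which question they missed, which is exactly the decisioning flexibility described above.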

Published: December 22, 2009 by Andrew Gulledge

Account management fraud risks: I "think" I know who I'm dealing with…

Risk of fraudulent account activity does not cease once an application has been processed, even with the most robust authentication products and tools available. A few market dynamics are contributing to increased fraud risk to existing accounts:

- The credit crunch is impacting bad guys too! Think it's hard to get approved for a credit account these days? The same tightened lending practices good consumers now face are also keeping fraudsters out of the "application approval" process. While that may be a good thing in general, it has shifted fraudsters' focus from application fraud to account takeover fraud.

- Existing and viable accounts are now much more appealing to fraudsters, given a shortage of application fraud opportunities as financial institutions have reduced solicitation volume.

A few other interesting challenges affect an institution's ability to minimize fraud losses related to existing accounts:

- Social engineering: the "human element" is inherent in a call center environment and critical from a customer experience perspective. It also gives fraudsters the opportunity to manipulate representatives, either to gain unauthorized access to accounts or, at the very least, to collect consumer and account information that may help them perpetrate fraud later.

- Automatic Number Identification (ANI) spoofing: this technology allows a caller to alter the displayed number from which he or she is calling to a falsely portrayed number. It's difficult, if not impossible, to find a legitimate use for this technology. However, fraudsters find the capability quite useful as they try to circumvent what was once a very effective method of positively authenticating a consumer based on a "good" or known incoming phone number. With ANI spoofing in play, many call centers can no longer confidently rely on this once cost-effective and impactful method of authenticating consumers.

Published: December 21, 2009 by Keir Breitenfeld
