
FHIR Integration Made Easy

At the moment the biggest challenge with FHIR is making it easy to implement a “FHIR server”. It’s easy to write a client to consume FHIR. It gets a little challenging to make a client that can talk to more than one FHIR implementation.

We don’t yet have the tooling in place to validate FHIR easily.

There are some ideas on the table to try and do FHIR conformance:


But it looks very complex (well at least to me!).

Here is a simpler idea.

Have a look at this page in our wiki on HL7 conformance:


It shows a model of how one can do high-quality conformance validation over a package of data. In this case the package of data happens to be a Version 2 HL7 message, but in FHIR it would be a JSON or possibly an XML payload.

Now the cool thing about this is that we don’t have to use special tools to do the validation. If you are a Java fan – write the validation of a resource in Java. If you like C#, use C#. If you like Iguana then write it in Lua with the Translator. If you love Corepoint or Rhapsody then use those engines. If you like Groovy use that… you get the picture.

The point is that the user of the validator doesn’t have to care what technology you have implemented the validation logic in. Your data goes in and nice messages come back telling you what you got wrong.
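As a sketch of that black-box idea (the resource shape, the required fields and the function name here are all hypothetical, not taken from the FHIR specification): a validator is just a function from a payload to a list of human-readable problem messages, and it could sit behind any HTTP endpoint regardless of what language it is written in.

```python
import json

def validate_patient(payload):
    """Toy validator for a hypothetical 'Patient' JSON resource.

    Returns a list of human-readable problem messages; an empty
    list means the payload passed. The checks are illustrative,
    not drawn from the FHIR specification.
    """
    problems = []
    try:
        resource = json.loads(payload)
    except ValueError as e:
        return ["Payload is not valid JSON: %s" % e]
    if resource.get("resourceType") != "Patient":
        problems.append("Expected resourceType 'Patient'.")
    if not resource.get("name"):
        problems.append("Missing required field 'name'.")
    return problems
```

The caller never sees Python here – the same contract (payload in, messages out) could be met by Java, C#, Lua or anything else behind the same URL.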

There are all sorts of lovely things that drop out of this model:

  1. Effort for writing validators can be distributed – which is great for a volunteer organization like HL7.
  2. It makes it possible for special interest groups that have a need for a particular profile to implement and validate their own profile.

It would be possible to do this in a decentralized model. Over time you would get natural selection of the high-quality, useful resources that lots of people choose to implement. If someone at the Cleveland Clinic wants to make a great resource for doing a questionnaire then they can go ahead and do it. If other people like it then they too can adopt it.

The beauty of this is that it can be done quickly – using the existing FHIR specification.

HL7 could set up a little lightweight central server for people to register their conformance servers – we need a catchy name… FHIR Button?

What do you think?



The model is really about opening up and democratizing the standards process by lowering the transaction costs in collaborating. V3 and CDA proponents could register resources and see who uses them. Over time natural selection will occur – people will gravitate to the resources that are most common and useful.

So for instance the investments that people have made over the years with Schematron and V3 should be allowed to co-exist with other resources – if people like these resources and find value in them then they will get adoption. Give people freedom to choose and the cream should rise to the top in terms of what resources are really helpful.

Some resources will be easier to do quality validation on, and that will affect the value people see in them – for instance, very large CDA documents are difficult to validate effectively, which may diminish the value that people see in them.

This approach makes perfect sense to me.

And since it takes immediate and direct advantage of ‘specialized’ code* rather than relying on a combination of external resources like rules and rule engines, it may be a perfect fit with my current work to implement a RESTful FHIR server that supports the 6th (optional) component of REST, “Code on Demand” — with validation code potentially delivered as part of the payload.

(Grahame, Ewout: if you’re reading this, you may recall my talking about supporting the ‘Code on Demand’ aspect of REST with you at the first FHIR Connectathon in Baltimore, as a better way of supporting FHIR Extensions)


Ah okay, so writing the validation code in Javascript… which would work if you pick a format like JSON, which of course plays well with Javascript? Although it would only work in HTML front ends… then again, HTML is very ubiquitous.

An open-ended system allows experiments like that to be tried, and if they get taken up, why not…

It might work.

@emuir RE “writing the validation code in Javascript ..would work if you pick a format like JSON”

Not really — the choice to use JavaScript code would be based mostly on the ability to execute that code immediately in a browser, but I’ve done quite a bit of XML processing using JavaScript in the past.

Remember: ‘AJAX’ stands for “Asynchronous JavaScript and XML”.

Exchanging and processing XML was how we did things with JavaScript in the browser at first, before JSON emerged as a better alternative.


I’m just sitting on a group that is looking at device integration with FHIR. It’s a beautiful example of how problematic it is trying to make centrally defined standards to handle device data.

There are so many different types of devices and the area is always changing. On the other side most of the EMRs don’t have the capability to display the data.

I think we’ve just got to let the device vendors go at it – unleash whatever data they have – make it visible and open up that data to creative developers to visualize that data.

Not to either trivialize the problem or oversimplify the solution, but in the broadest strokes, the only answer to this is to be more dynamic.

I’m going on the record here as saying that things have gotten way too complex to continue trying to solve integration problems by attempting to determine everything in advance so that we can prescribe a static, pre-defined solution that will work “for most” (80/20) or “in most cases”.

That may have (almost) worked 25 years ago, but it’s a failed approach today. The next generation integration standards need to fully embrace a completely dynamic approach if we’re going to truly achieve “drive-by interoperability”.

We need smarter, more intelligent code, NOT better static models and data structures.

Hi Eliot,

Not sure what you mean by “we don’t have the tooling in place to validate FHIR thoroughly”. On the XML side, we have both schema and schematron. We also have reference servers that perform the equivalent validation on JSON. We have generated code that handles resources in Java, C#, Delphi and eCORE (with development of additional code generators welcomed). So I’m not sure what more you think is needed.

The Conformance resource has little to do with validation (that’s what the Profile resource is for – it’s even more complicated, by necessity). The conformance resource is about discovery, auto-configuration and consistent documentation of application behavior.

@thomas – I agree – for reference, here is the device resource:


It’s so very general and yet there are so many different devices. I know at UHN they have specialized EMRs which are used for different kinds of devices like EKGs. The real world of healthcare data is very diverse and varied. The problem is that the structure of the data is quite opaque to most people – if you are a bright young entrepreneur who wants to change how the world visualizes and uses this data, you will have a hard time getting information on what the data is.

@Lloyd – You mentioned offline that I could use a tool called Sprinkler to test it – it throws a wide variety of resources at your server and ensures that the server does what it says.

Sounds good – where is the information on this tool?


The last few connectathons I brought a small application (currently a .NET exe) with me called Sprinkler. I point Sprinkler at one of the connectathon participants and it will fire a set of test messages to the server to test its conformance. Our plan is to move that test code to a webserver, so you can go to the Sprinkler website, type your server’s name and be tested.
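To illustrate the idea (the specific checks and the `fetch` interface below are invented for illustration – Sprinkler’s real test battery and API are different and much larger), a conformance tester is essentially a list of request/expectation pairs fired at a server:

```python
def run_conformance_checks(fetch):
    """Fire a small Sprinkler-style battery of checks at a server.

    `fetch(method, path)` is any callable returning an HTTP status
    code -- e.g. a thin wrapper around urllib pointed at a live
    server. Each check pairs a description with an expectation.
    """
    checks = [
        ("read of a known resource returns 200",
         lambda: fetch("GET", "/Patient/example") == 200),
        ("read of a missing resource returns 404",
         lambda: fetch("GET", "/Patient/does-not-exist") == 404),
        ("create without a body is rejected",
         lambda: fetch("POST", "/Patient") in (400, 422)),
    ]
    return [(name, check()) for name, check in checks]
```

Because the server interaction is injected as a callable, the same checks can run against a live endpoint or a stub, which is exactly what makes a web-hosted "type your server's name and be tested" service straightforward.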

I figure your idea about distributed testing is also about using a distributed approach to do instance validation. We have a REST validate operation in place that you can use to bring an instance to a server and have it be validated. I always envisaged that this was going to be used to do a validation of an instance before submitting it “for real” to that same server.

It’s interesting to think about using it in a slightly different way: FHIR instances could carry a validation URL in their metadata (we’re using Atom to transport instances, so the Atom entries might be a place to do this) to refer back to one or more endpoints where the profile and business rules applicable to that instance can be reached, and where you can go to get your instance validated.

I’d have to think a bit more about whether this link is just a nice functional way to validate your instance, or whether it is actually some statement of conformance (“I mean this instance to conform to the rules for this (profiled) resource as defined on that server”), and how this aligns with our discussion of having profiles and profile identifiers on instances, but it’s something worth pursuing a bit further.
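A minimal sketch of that metadata idea, assuming an invented link relation (“validation” is not a registered Atom rel, nor anything defined by FHIR): the Atom entry carries a link back to a validation endpoint, and a client simply pulls it out.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# A hypothetical Atom entry carrying a FHIR instance; the
# rel="validation" link and the host name are invented examples.
entry_xml = """\
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Patient example</title>
  <link rel="validation"
        href="https://conformance.example.org/validate/Patient"/>
</entry>
"""

def validation_urls(xml_text):
    """Return the href of every link with rel='validation'."""
    entry = ET.fromstring(xml_text)
    return [link.get("href")
            for link in entry.findall(ATOM_NS + "link")
            if link.get("rel") == "validation"]
```

A receiver could then POST the instance to each returned URL and collect the resulting problem messages, without needing any local validation tooling at all.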


This is definitely not a criticism or complaint in any sense, but my overall, gut reaction to this discussion so far (especially after reading your remarks) is that at least at a very high level, we’re not really escaping the need for “local agreements” — it’s more like we’re finding a way to automate them.

And if that’s *all* that we’re able to do initially with FHIR, then I think that we’ll have accomplished quite a lot.

@emuir, @Lloyd RE “You mentioned offline..”

It’s not fair to the rest of us participating (and investing time) in this discussion if @Lloyd’s answer to @emuir’s question is happening offline. I for one would like to see it.

@Thomas. I agree – the process and discussion about FHIR does need to be transparent. In some ways I think we need to see some cultural evolution in HL7 towards a model where there is a much more open process that reflects what is possible today with blogs and social networking, which make for more accountability.

Within that vein, I want to be really up front and totally honest about my own motivations:

1) Sell more Iguana!!
2) It’s helpful to be involved in debate on interoperability since it’s an issue which affects all the people I would like to sell to.
3) There will be more demand for a product like Iguana in a healthcare IT world where data flows more easily, because
4) Iguana is a really solid product that delivers value in its own right – it’s even more useful when there are well-defined, publicly documented APIs which decrease the cost of integration and make more projects viable for delivering value by integrating data.

I personally just don’t see the need to make healthcare standards any more complicated than they need to be, because the complexity and problems of the healthcare business itself provide more than enough craziness. Also, crazy healthcare standards tend to help competing vendors that are willing to hold their nose and make specialized tools for the niches they helped to create. I am confident enough in the value of Iguana that I don’t need to contribute to the craziness. Making things simpler suits my agenda.

On a personal level, as someone who pays a lot of taxes (Canadian taxes are high), I want my tax money to be spent well, and I would like my healthcare experience – as a patient and for my family – to get better with better interoperability in healthcare. I am fortunate that my business interests coincide with my personal needs.

Truth is that there are lots of incentives for different economic actors in healthcare to make interoperability and standards more complicated. The reality is that sometimes these create barriers to entry for competitors. What happens, though, is that at some point a tipping point occurs when suddenly it becomes in everyone’s interest to make interoperability easier rather than harder.

You have to look at the players realistically to try to understand what their motivations are. Follow the money. One area people make money in is consulting to government organizations, helping them develop interoperability across all the various programs they want to do. There are perverse incentives to make the standards a little complex and obtuse, because that creates barriers to entry for competitors for the work you want to do.

Of course you can have too much of a good thing. One beautiful example was the NHS Spine. I had the joy of reading the spec on that one time to see if we could implement it. It was a nutty amalgam of LDAP, V3 and ebXML combined into an impenetrable mess.

There were consulting companies associated with the development of that spec that had then developed products for connecting to it. They were a little too effective, though: I think in the end the NHS saw through it all and realized that it was too complicated to succeed. It was a classic example of ‘consultant capture’. NHS Spine II is coming soon – “Resurrection”, dun dun – we’ll see if they manage to avoid the same trap. I met someone from NHS Connecting for Health in Atlanta on Wednesday and forwarded them James Agnew’s HL7 over HTTP proposed standard – I hope they go for it, since it would increase the role that Iguana could play in the NHS.

To change a market you just need to move in a direction where all the players have to start changing their game.


I nudged Ewout to provide the official response, seeing as it was his tool. Realistically, there are always going to be multiple communication paths. Those at the WGM or Connectathon are going to have more “immediate” information than those who are monitoring remotely, but we do try to ensure that information propagates as rapidly as possible.

One of the things we’re looking at doing is funneling FHIR implementation questions through Stack Overflow. (Look for an announcement on Grahame’s blog soon.) That should hopefully help with both the dissemination of, and the institutional memory on, FHIR implementation-related topics.

Well, I don’t have my head deeply into FHIR. That said, one field sticks out to me in my role in Laboratory Informatics: “UnitsOfMeasure”. It makes sense from a coder’s perspective (strong typing and all) to specify this in Capabilities. This is all good only if you take this data AS ONLY A HINT. The actual result that comes to you should specify all of the data necessary for interpretation as an atomic entry.
A result must contain the Observation Data, the date and time of specimen collection, the units of measure, and (Panic High | High | Low | Panic Low | Absurd) flags. That is the bare minimum necessary to have a valid result.
It is wrong to assume that even 5 minutes later that the units of measure will be the same as when you polled the (Machine | System). Methods of testing and units of measure can and will change at a moment’s notice. A different reagent pack or testing pack in an instrument can mean a different Unit of Measure in a heartbeat.
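That “bare minimum” can be captured mechanically. Here is a sketch, with field names invented for illustration rather than drawn from any HL7 artifact, of checking that a result arrives as a self-contained atomic entry:

```python
# The field set mirrors the "bare minimum" for a lab result argued
# above; the names are illustrative, not from any HL7 standard.
REQUIRED_RESULT_FIELDS = (
    "observation",       # the observation data itself
    "collected_at",      # date/time of specimen collection
    "units_of_measure",  # carried on the result, never assumed
    "flag",              # Panic High | High | Low | Panic Low | Absurd
)

def missing_result_fields(result):
    """Return the names of required fields absent from a result dict."""
    return [f for f in REQUIRED_RESULT_FIELDS if not result.get(f)]
```

A receiver applying a check like this would reject any result that relies on previously polled capability data for its units, which is exactly the failure mode described above.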

@Ewout that would be a step in the right direction – making the tool accessible through a web browser anywhere in the world, including to people working remotely. You could take it a step further and just make it part of the implementation of a loosely coupled, specific utility to validate a resource.

For FHIR to have a serious chance of success there is no point sugar coating these issues. Best to bring everything out explicitly and deal with it. These questions will be asked quietly and privately if they are not addressed openly in this manner.

If FHIR is to be successful then it has to be like HL7 Version 2.x, in that all participants have to be able to use only the tools they are already comfortable with. One of the numerous reasons that V3 failed to get widespread market adoption was that it required a large investment of time to learn all the tools that were developed internally by HL7. Most people simply don’t have the time.

You need lots and lots of people to be involved with FHIR to make it successful. And to do that you have to figure out some smart ways to lower the barriers to entry, giving people that might have only a slight interest in FHIR the opportunity to quickly get involved and build real server and client implementations.

The reason is that you need people that are very close to specific domain problems. An engineer in Mumbai who is working on a glucometer for a medical device start-up needs to be able to easily find or create a FHIR profile and discuss it online, without finding the equivalent of three months’ salary to come to a WGM only to discover they will need to attend six meetings and lobby like crazy before they even get a shot at contributing to the standard.

One of the things I noticed at the HL7 working group is how daunted a lot of the people were by even the current tool set needed to get going with FHIR. You have to install Subversion and have the right version of Java. All of this stuff comes with a whole learning curve. Right now I can’t even really locate where the tools can be downloaded from the FHIR site.

I believe it would be possible to come up with a much more lightweight, loosely coupled approach which is not centrally controlled, gives visibility, and lowers the cost of collaborating on standardizing the flow of data – one that can be done with a few simple HTTP-based protocols implementable with any technology stack.

This has to be much larger than just HL7 – HL7 can be part of it, but this has to be much larger.


FHIR is being extremely open. We’ve got a wiki page behind every spec page. We have a list server (instructions to connect on wiki). We’re starting a Stack Overflow tag. We have a group Skype chat for implementers (instructions to sign up on wiki). We have public minutes of all governance meetings on the wiki. We have connectathons, open conference calls and open meetings during the Working Group. We regularly hear from and answer questions from implementers all over the world.

So I’m not sure what you feel we’re sugar coating, nor what community you feel we’re being inaccessible to.

You can’t find the tools for developing FHIR resource designs because you haven’t been identified by one of the WGs developing FHIR resources as one of their committers. (Though if you dug a bit on the wiki, you’d find them without too much trouble.) Creating resource designs is not something that’s currently in scope for implementers to do. There should be no need for any implementer to access SVN right now – they *can*, but it’s not a requirement for them to implement FHIR in any way.

We are considering allowing resources to be defined directly by the implementer community (partly outside the conformance framework), but if we do so, want to ensure we don’t introduce the problems associated with HL7 v2 Z-segments. Specifically, we don’t want there to be collisions between same-named resources and we also want to ensure that anyone who receives an implementer-defined resource will have a means of discovering the meaning of the contents of the resource in an automated fashion. Once that’s figured out, we’ll look at opening up resource development tooling to the outside community. We’ll also need to get the tooling up to the point where it can be more easily used without as much support as we currently provide HL7 WG members who are doing development.

Remember, while free, FHIR is still an HL7 standard. If you want something where there’s no central management what-so-ever, then you don’t want FHIR. FHIR is subject to ballot processes, controls on what is part of the standard and what isn’t, rules around what constitutes conformance, etc. We are working with many external organizations – IHE, DICOM, CIMI, ONC and others. We’re certainly not being insular or isolating ourselves from external opinions – particularly those of implementers.

FHIR is not a free-for-all. It allows an enormous amount of flexibility, but it may not be flexible enough for your desires.

This thing has to be driven by a core of end users that have a fire in the belly. If not, then it just becomes another vendor dunsel. You cannot over-emphasize the importance of the KISS principle. That was the KISS of death for v3. The foundation is right, but I have 4 other projects I’m working on today. If it’s not easy for me to look at and understand, it’ll just be another Cat picture on Facebook.

I think you have nailed it. There are legitimate purposes to healthcare technology, but it’s unnecessarily complex, and made even more complex by the tangled mess of billing and government regs. Add to that the fact that billing rules are growing in complexity to the extent that when payers save a dollar, it costs two just to put in place all of the rules and business processes to keep track of it all. If bread were sold in the healthcare market, you would have to prove that your kids are hungry and would need to provide check stubs to prove it.
To be blunt, v3 won’t work. There were a couple of things that actually addressed needs, such as CDA. As for the schema, it looks like a gigantic mess of spaghetti. My initial impression of FHIR was that it was a bottom-up initiative. To that end, it can serve a useful purpose, but ONLY IF it has the discipline of the end users in charge. If Cerner et al. start to heavily influence it, then we have another flash in the pan.

I think that the concept is interesting, but the implementation can kill it. As for conformance, right now we are in a place where if it works between the HIS and the RIS, I’ll take it. Conformance would be nice, but if you compare “Conformance” to “Working”, I’ll take “Working”.

In that FHIR is a clean sheet, and it reaches to the Architecture, it could be a good thing. I also realize that v3 started that way as well.


Eliot, depending on what you mean by “to get going with FHIR”, all you need is to read the page called Implementation (if you want to use the C# or Java API), and the pages Formats and Http if you want to write your own client or server.

The java written publication tool is only needed by those who wish to author resources themselves. We use subversion to maintain the materials for the HL7 controlled resources, but that’s simply an industry standard.

If we can improve upon this, we’ll gladly do so, but that’s already a pretty minimal toolset; the tools are all widely available and they are all free.

@Bob – kind of a funny bit of trivia, but when I mentioned this whole idea in an ITS meeting there was a guy from Cerner who got it. He likened it to Bitcoin for healthcare standards. We do business with Cerner – it’s a big company – they have smart people that want to do the right thing. Just like HL7 itself is a big organization with a lot of people that want to do the right thing. The institutional structure of HL7 was designed at a time when healthcare IT was a lot simpler – it just wasn’t built to handle the diverse complexity of the modern healthcare IT marketplace. It fails for the same reasons that central planning of economies fails.

Funnily enough I was pretty impressed with how the guys from Cerner at HL7 grok it. At the end of the day those guys are implementors, which means they know what it takes to maintain real production systems, why the industry needs Z segments and what makes V2 a successful standard. They live in the real world.

I think the whole crux of the problem is that healthcare IT has become so vast, rich and complicated that the old centrally planned approach to architecting standards simply doesn’t work anymore. The way HL7 works today was groundbreaking in the ’90s – but the problems we faced and the breadth of data were so much smaller then than they are today.

No one on the planet is smart enough to really grok the entirety of everything which goes on in healthcare IT. We need new much more lightweight ways for people who want to share data to collaborate together. Web APIs need to be created quickly, allowed to fail fast or succeed. What wins and gains prominence needs to do so not on the political weight you have in the standards committees but on technical *merit* and usefulness – what actually gets implemented. Shocking concept! What data do people actually want to exchange – what do they need to share?

It’s crazy how often this comes up – just while I was at the airport on the way back, a question came through from a new product group we are doing business with at GE about where the heck you put ‘Dose’ in the HL7 2.x standard. There is no clear place they could find, so they have been using the same old common data bag, the OBX segment, just like everybody else does. GE has a sizeable contingent of very strong contributors to HL7 – but the communication process just doesn’t scale: too many communication paths, and it’s too difficult to collaborate.

As a result, out in industry, as much as people would like to implement standards, it’s next to impossible – the standards simply aren’t there. So people resort to all sorts of random ad hoc ways to solve the real problems they have to make things work. Clearly, given what we can do with the internet and social networking, better, faster, more modern ways to collaborate exist.

Interesting discussion. I would like to point you to a few works that may help stimulate the discussion:


Where we wrapped different sets of APIs into a single end-user programming language, leveraging some introspection capability of the devices.

you could also have a look at this:


Where we proposed a template as a “meta description” of a service that could be used by cloud service engineers to combine multiple services.

Finally you could look at this:


Where we implemented a uniform instrument model and a related set of services for integrating multiple devices.
