Is it so wrong to have a favourite statement in a programming language? Probably. However, I am not embarrassed to admit that in QlikView mine is ApplyMap.
At face value it is a humble little function, which I tend to explain to the uninitiated as being “a bit like a VLOOKUP in Excel”. This, however, does it a great disservice.
So, why do I hold ApplyMap in such great affection?
Basically it helps make light work of fixing problems in the data model that can otherwise lead to serious performance issues. Let me demonstrate.
Whilst this post references QlikView, ApplyMap is just as useful in Qlik Sense. All of the principles and code snippets can be applied to Sense, you just need to note that the method of referencing flat files has changed slightly.
Let us start with the assertion that too many tables in your data model is a bad thing (a topic for another article). So, you don’t want a situation where many lookup tables are joined to your fact table. This can be resolved by creating a mapping table and switching out the ID for the value using ApplyMap. In its simplest form you just create a mapping table and give it a name; this name is then used in the ApplyMap statement, which swaps the ID for the value. The basic syntax looks like this:
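A minimal sketch of the pattern (the table and field names here are illustrative, not from the original post):

```qlikview
// The MAPPING prefix is what makes this a mapping table
Map_Country:
MAPPING LOAD
    CountryID,
    CountryName
FROM Countries.qvd (qvd);

// ApplyMap swaps the ID for the value as the fact table loads
Facts:
LOAD
    OrderID,
    ApplyMap('Map_Country', CountryID) as Country,
    Amount
FROM Orders.qvd (qvd);
```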
Simple enough. Just a couple of quick points after the first example:
- The mapping table must have only two columns
- The first column is always the ID and second column always the value
- The column names in the mapping table are not relevant
- The mapping table only exists during load
- No associations are made by column name
- Getting the table name right is essential
- No error is raised if you use a mapping table name that doesn’t exist
- It is easy to miss the MAPPING prefix on the load – check this before reload!
Right, now we have the basics down we can look further.
The source from the mapping table can be database, spreadsheet, resident table, QVD (requires a workaround in v9 and below) or as in this example an inline table:
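A sketch of an inline mapping table, using the Y / N swap described below (field names are illustrative):

```qlikview
Map_YesNo:
MAPPING LOAD * INLINE [
    Flag, FlagDesc
    Y, Yes
    N, No
];

Data:
LOAD
    RecordID,
    ApplyMap('Map_YesNo', ActiveFlag) as Active
FROM Records.qvd (qvd);
```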
This is great for very simple fixed replacements, such as swapping Y / N with Yes / No. A single mapping table can be used many times in a script – which can be useful for this sort of application.
If you are of a SQL background you may now be thinking that you could resolve lookups like these with a JOIN in SQL – maybe a view on the SQL Server that resolves these links pre-load? If so, let me stop you there. This approach is simple but can drastically increase load times as the full description will be sent from the database to QlikView many times (once per row). With ApplyMap the much shorter ID is sent many times and is switched out for the longer description during the QlikView load. Your network and users will thank you for not building a view up front!
Now, where ApplyMap starts to set itself apart is in its third (optional) parameter. If omitted the function returns the ID – this is great for debugging, but can produce field values that look untidy. The third parameter allows you to return something more useful, such as a fixed string like Unknown:
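The third parameter simply goes after the lookup value, along these lines (map and field names are illustrative):

```qlikview
ApplyMap('Map_Country', CountryID, 'Unknown') as Country,
```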
Alternatively you could opt for a different field from the same table instead:
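For example, falling back to another field from the row being loaded (assuming a CountryCode field exists alongside CountryID):

```qlikview
ApplyMap('Map_Country', CountryID, CountryCode) as Country,
```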
Or build something using a mixture of both:
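Perhaps something like this, combining a fixed string with a field from the row (names illustrative):

```qlikview
ApplyMap('Map_Country', CountryID, 'Unknown code: ' & CountryCode) as Country,
```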
With the third parameter ApplyMap has already exceeded what can be done with a join. I shall revisit the third parameter later with something which I think really sets ApplyMap apart from the rest of the field.
The third parameter (or its omission) neatly deals with the problems that can arise if your lookup table is missing rows. The missing rows can then easily be identified and fixed at source. But what if there are duplicates? With a SQL JOIN statement this can be disastrous – as a duplicate row is caused in your fact table often leading to double counting. With ApplyMap the first instance of the key in the mapping table is used and the subsequent duplicate values are ignored – avoiding any duplication of fact data. If you wish to identify duplicates in your lookup tables this can be done in a separate QlikView document – or another tool if you wish. Rest assured that with ApplyMap duplicates will not cause you to get incorrect numbers.
As with associations between tables in QlikView, mappings can only be done on a single key field. This can be dealt with elegantly by building a composite key on both the mapping table and the table being mapped to:
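The pattern is to concatenate the key fields identically on both sides (table and field names are illustrative):

```qlikview
Map_Price:
MAPPING LOAD
    ProductID & '|' & RegionID as LookupKey,
    Price
FROM Prices.qvd (qvd);

Sales:
LOAD
    OrderID,
    ProductID,
    RegionID,
    // Build the same composite key here; it is never stored as a field
    ApplyMap('Map_Price', ProductID & '|' & RegionID, 0) as Price
FROM Sales.qvd (qvd);
```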
The good thing here is that if a long concatenation of fields is required to get a unique key it does not take up valuable space in memory in the final document as the field is dropped once it has been used for the mapping.
If multiple fields are required from a table then you may consider joining. Alternatively you can create multiple maps from the same table. This will be more efficient at run time, but requires more code in the load script, which can become unwieldy. I tend to draw the line at around four or five fields from a single table for this approach:
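A sketch of two maps built from one source table (field names are illustrative):

```qlikview
Map_CustomerName:
MAPPING LOAD CustomerID, CustomerName
FROM Customers.qvd (qvd);

Map_CustomerTown:
MAPPING LOAD CustomerID, Town
FROM Customers.qvd (qvd);

Orders:
LOAD
    OrderID,
    ApplyMap('Map_CustomerName', CustomerID, 'Unknown') as Customer,
    ApplyMap('Map_CustomerTown', CustomerID, 'Unknown') as Town
FROM Orders.qvd (qvd);
```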
Be aware that as soon as there is a one-to-many situation between tables (i.e. not a straight lookup), a separate table must be loaded into the data model – rather than using a mapping table.
The parameters of the ApplyMap statement can be fixed strings, other functions and fields from the row being loaded, this example has a mix of those things:
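Something along these lines (illustrative): a function applied to the lookup field, and a default built from a fixed string and another field:

```qlikview
ApplyMap('Map_Status', upper(trim(StatusCode)), 'Missing status: ' & StatusCode) as Status,
```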
When building mapping tables you are sometimes presented with getting information from tables that themselves contain IDs. These can be dealt with by using an ApplyMap statement in the build of the mapping table, like this:
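A sketch of the pattern, with one map resolving an ID inside the build of another (names illustrative):

```qlikview
Map_Region:
MAPPING LOAD
    RegionID,
    RegionName
FROM Regions.qvd (qvd);

// The region ID inside the customer table is resolved as this map is built
Map_CustomerRegion:
MAPPING LOAD
    CustomerID,
    ApplyMap('Map_Region', RegionID, 'Unknown') as Region
FROM Customers.qvd (qvd);
```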
Various permutations of using one map within the creation of another are possible. All the techniques discussed above can also be layered on top of other mapping tables – perhaps a composite key for one mapping table built from a field value and the output from another ApplyMap statement?
Another way that mapping tables can be combined is by using an ApplyMap statement as one of the parameters of another ApplyMap statement. In this example, if nothing is found when looking up on the first field then a different map on another field is tried:
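Sketched out, the second map sits in the third parameter of the first (names illustrative):

```qlikview
ApplyMap('Map_NewCode', NewCode,
    ApplyMap('Map_OldCode', OldCode, 'Unknown')) as Description,
```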
As you can see one ApplyMap can be nested happily inside another one. This nesting can go to as many levels as you require.
In its simplest form ApplyMap can serve a very important purpose with very little code required. By nesting maps, combining with other functions and concatenating, complex manipulations can be done on data load that save large amounts of memory and processor time at run time – when it is needed most.
This little statement has made my life easier on countless occasions – I don’t think it is too wrong to call it my favourite?
Hi Steve,
Excellent post, I never realised that there was so much to say about the ApplyMap function ;)
If I had to pick a favourite function, it would probably be AutoNumber.
Cheers,
Barry
Hi Steve,
Great post. ApplyMap is a lot more versatile than I realised :)
Can’t say I have a favourite function (at this time).
Greetings,
Mark
Well done Steve!
Cheers Steve… once again you’ve helped me out…
PS yes
Excellent post Steve. Opened new avenues to use ApplyMap function.
Many thanks – DV
Great article, thanks Steve.
As someone relatively new to QlikView this is invaluable.
Valuable information helped me a lot. Thanks you for sharing your knowledge
Good one. Basics are taught in much better way.
really clear explanation! thank you!
Hi Steve,
This is Sateesh; I am very new to QlikView. Could you please tell me the exact difference between ApplyMap and Mapping Load, and in which situations I can use these two functions?
Regards,
Sateesh
Hi Sateesh – you need to load your data with a MAPPING LOAD in order to then do an ApplyMap; the two things don’t do anything independently. ApplyMap tends to be quicker than most mechanisms for joining data – but you can always benchmark if you need to be sure.
Great article, Steve! Apply map is also my favorite. This is probably single most detailed article I have found on apply map. Great work!
Thanks for your comment Shilpan – and for your great work over at http://www.learnqlickview.com
Hi Steve, great article, and explained very well. I have a question around the ApplyMap function.
Do we always need to use DISTINCT while loading data for the mapping table? I have checked that if a key has multiple values then the first occurrence of that key will get applied when using the ApplyMap function. I think the first occurrence is decided by the load order of the mapping table.
Please correct me if I am wrong.
Thanks,
Anosh Nathaniel
What you say there is absolutely right. It therefore makes the DISTINCT statement redundant, as for a row to be excluded by the distinct it must be identical to another row – so therefore it wouldn’t matter which of the multiple rows it picked. If your source data is clean however, you probably shouldn’t be hitting these issues?
Great article, Steve, apply map is very useful.
I have a mapping table like this :
MyMap:
MAPPING LOAD * INLINE [
X, Y
1, ONE
2, TWO
3, THREE
];
ApplyMap('MyMap', num) AS myNum returns ONE if num equals 1, and so on.
How can I use the same map table to get X given Y?
thnx
Hi there,
The obvious answer is to find a file with all numbers in it, written out, up as high as you can go. A quick Google shows this isn’t as easy as you might expect. In a previous life I wrote a SQL function to convert an integer to words, by breaking it down into parts with a substring and building the English up a bit at a time. That function could return a list from One to however high – if I still had access to it. I tried getting Excel to do it; you would have thought this would be a useful thing for it to do, but no joy.
Anyone else have any suggestions?
Hi Steve,
Thanks for the article, it’s very informative. Can you tell me if it is possible to use apply map to replace only part of a string.
I am trying to cleanse address data and want to replace rd with road, st with street etc however I obviously don’t want to replace any other rd or st that may occur in the address prior to the last word.
so Stevens St would become Stevens Street not Streetevens Street !
Hope this makes sense.
Many thanks.
Hi Ben,
You may be able to use ApplyMap as part of the solution – but address cleansing tends to be a bit of a tangle of nested Replace statements, you probably want something like:
if(lower(right(Address, 3)) = ' st', mid(Address, 1, len(Address) - 2) & 'Street', Address) as Address,
That will do Street for you, but if you want to do the same for Rd and Road you would need to nest that in another if looking for ‘ rd’ at the end of the string.
Enjoy!
You can try MapSubstring. It’s similar to ApplyMap but works on parts of a string. I know this is not useful after 3 years but maybe good to know about this. :)
Why has no one told me about this function before! I’m not sure I can see many use cases for it, but it is good to know it exists. It’s in the new Qlik help files, here: http://help.qlik.com/en-US/qlikview/12.1/Subsystems/Client/Content/Scripting/MappingFunctions/MapSubstring.htm . Thanks for making me aware of this Goutham.
Hey Steve :)
You’ll be surprised to know that MapSubstring is a powerful function and has many use cases. It can be used for finding a keyword in a string, e.g. a group of advertisement keywords that occur in a column. It doesn’t do a case-insensitive search like WildMatch, but this can be overcome by using Upper on the target column. My only gripe is that it doesn’t have an “else” clause like ApplyMap does. If you want an example, please share an e-mail so that I can share a qvw.
I have a table with a field that returns the values 1, 2, 3, 4, 5, 6, 7, and I need to group the values 1, 2 and 3 as 'New name' and 4, 5, 6 and 7 as 'Other name'. How do I do that?
Hi,
You can do this by having your mapping table hold the same value for multiple keys. For just seven keys you could do it with an inline table:
Map_ValueToName:
MAPPING LOAD
ID,
Name
INLINE [
ID,Name
1,New name
2,New name
3,New name
4,Other name
5,Other name
6,Other name
7,Other name
];
If you have more values then you could store them in a spreadsheet and load them in. If the requirement is as simple as <= 3 is one name and > 3 is another then I would simply do an inline if statement. ApplyMap does have more flexibility though.
Hope that helps!
Thanks!!
One more question. How can apply in my code?
LOAD * FROM
[C:\Users\Desktop\Microdados\funcoes\Matriculas_2012_PR.csv]
(txt, codepage is 1252, embedded labels, delimiter is ';', msq);
I am a newbie in this area.
To use ApplyMap, as in the above examples, you will need to replace the * with a list of fields in the CSV. It’s generally a good idea to do this anyway. Once you have done this you can add a line after the ID field with the ApplyMap statement on it.
Hope that helps.
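To make that concrete, the load might end up looking something like this (the field names and the mapping table are hypothetical, as the actual columns in the CSV are not shown):

```qlikview
Map_School:
MAPPING LOAD SchoolID, SchoolName
FROM Schools.qvd (qvd);

Matriculas:
LOAD
    StudentID,
    // The ApplyMap line is added after the ID field it maps
    ApplyMap('Map_School', SchoolID, 'Unknown') as School,
    EnrolmentDate
FROM [C:\Users\Desktop\Microdados\funcoes\Matriculas_2012_PR.csv]
(txt, codepage is 1252, embedded labels, delimiter is ';', msq);
```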
Hi Steve. When I use an inline mapping table, the map doesn’t automatically update from the source when doing a reload. We need to rename fields, as in the source it is sometimes just strings or numbers that don’t mean anything. Can you let me know if there is a way of updating an inline mapping table from the source, please?
Hi,
The mapping table should be read every time you reload. The only way it wouldn’t is if you are doing a partial reload. I never use a mapping table for renaming columns, although I know this is a feature that is available. To do column renames I always use an AS statement in the load script. Hope that helps.
Hi Steve,
Very good post!!!
Happy Qliking..
– Yojas
Steve,
Great article, but just need to clarify… In addition to using applymap(), you are still loading the base dimension tables upstream in script, correct? Ex. Product: , Category: , OrderType: , etc. If not, seems this use of Applymap() would be circumventing an otherwise (good) star schema in the QV data model, by just denormalizing dimensional strings into the facts. If, however, you are suggesting that Applymap() is used to denormalize strings into the facts for non-conforming dimensions only, then a different use – I.E. simplification of data model only for non-star values.
Pulling fields into the main fact table so that there is no need to load other tables to form a star schema is exactly what I am suggesting. Each of the joins between tables in your data model needs to be resolved for every expression where fields from more than one table are used in the expression or are dimensions. QlikView does this lightning fast – but if you can bring fields into the main fact table so it doesn’t need to do that ‘hop’ between tables then it will be even more performant. For satellite tables with many dimensions it is probably worth loading them separately – when there are numeric values with different granularity it is essential. However, having a goal of reducing the number of tables in your QlikView data model is a good thing.
Steve,
Ok, I like the performance gain. However, how would you accomplish slowly changing dimensions (SCDs), without loading actual Dimension keys into the facts?
Slowly changing dimensions is quite a big topic, but in a nutshell what you may choose to do is ApplyMap in the current value for a field when you first load a record. Using an incremental strategy that value would remain unchanged. If you also carried the ID and the dimensions attached to that ID changed, the dimension value in the joined table would show you the current state whilst the values stamped on the old records would be static – so you could see where there had been changes between the then and the now.
Right, but you would have to denormalize all dimension attributes into all fact(s) during each load. If dimensions are loaded as separate tables, you would not have to. I.E. If attributes are added to dimensions over time, having separate dim tables allows you to maintain them in one place. Just trying to follow scenarios (in other BI tools) that I have encountered.
That is correct. It all comes down to whether your application performs well enough or not. It’s similar to deciding whether you have a warehouse style table in your database where you store denormalised data or whether you always go to the production tables. It’s different in every case and you have to make a call. Rob Wunderlich said at a recent Masters Summit that if performance is not causing pain why go to the effort of optimising everything – whilst I don’t entirely agree I can certainly see where he is coming from.
Thanks Steve. I need to get my head around some of these techniques, that I took for granted using OBIEE and a conventional star schema design. I really like QV.
Thanks Steve for this very good article. But I have a slightly different understanding – please correct me if I am wrong.
Yes, pulling the value into the fact table to replace the ID is very easy to use. However, for a huge fact table I would prefer to keep the short ID joined to a dimension table, because pulling values into a huge fact table will cost a lot of RAM/space, especially when the values are long. On the other hand, I think applying too many maps will harm reload performance compared with an associated join, e.g. your customer name and town example.
In the end, I still have to admit ApplyMap is one of the most important functions.
Hi Wenkun, the crucial thing to remember is that QlikView only stores one copy of each unique value in every column in the data model. In this respect it is almost as if every field is its own normalised table that joins into a core table on an efficient key. This is always difficult for people with a database background to get their heads around! What is important is that you bring the small key in from your database and ApplyMap in QlikView; if you do the JOIN in SQL then you will bring the same value across many, many times, and that will kill your bandwidth.
There could well be a reload hit from using an ApplyMap over having an associated table in the data model – but as you load once and then query often, front-end performance is the more critical thing to watch. If reload performance becomes a bottleneck then incremental loads are probably the solution. JOINs in QlikView load scripts can have undesired results (such as duplicating rows in the original table) if you are not careful about your join.
Let us start with the assertion that too many tables in your data model is a bad thing (a topic for another article).
Did you ever write that article? And are your views the same as Henric’s?
http://community.qlik.com/blogs/qlikviewdesignblog/2012/09/12/to-join-or-not-to-join
I tend to prefer keeping tables separate for the reasons Henric gave, but others I know have strongly opposing views.
Thanks
There is (as Henric states) no reason not to have tables in your data model. Lookup tables with a single ID and value, however, make very little sense – and do add some overhead to calculations. It is always a case of considering the options and going for the best fit. Ease of reading and logical layout of tables is a valid reason not to map all tables together.
Thanks for your reply. I am just trying to reduce the size of my document and keep the table details within reason. I initially had over 30 tables, then reduced it down to 15 (mainly by using ApplyMap), based on a belief that too many tables were not good. But I might take a few of the ApplyMaps out, like employee details, to see how it reduces the file size (50 employees but 350,000 rows).
Having an employee details table doesn’t sound like a bad thing. Whether the values sit in an associated table or in the fact table won’t make too much difference though (due to how QlikView stores duplicate values in tables). You will get a saving using ApplyMap if you can avoid carrying the join key (EmpID, for instance) into the data model – as it won’t be needed for associations.
Bigger savings on document sizes can be gained by dropping irrelevant fields or decreasing the number of unique values in a column. Ensuring numbers are rounded to a sensible number of decimal places can make a huge difference.
Good blog Steve, a comprehensive coverage of ApplyMap.
Mapping tables & ApplyMap was one function that I didn’t really appreciate the power of, in my early days with QlikView, but is now an essential tool for building efficient data models.
Great work.
Hi Steve, great article. Could you please help me with a problem, if I can use a mapping function?
(I have 3 variables loading from different QVDs, I was looking to match them with a typical requirement)
Requesting for expert’s comment on: http://community.qlik.com/message/556375?et=watches.email.thread
Many Thanks and Regards
– Deep
Hi Deep,
I’ve provided some thoughts over on your Community post. Hope they are useful. Thanks for your comments on the blog.
Steve
Dear Steve,
First, a big thank you for the great posts here and on qliktech as well. I am relatively new to QlikView; I am building up my first data model and application at my company.
Let me address this sentence in the above post:
'Be aware that as soon as there is a one-to-many situation between tables (i.e. not a straight lookup), a separate table must be loaded into the data model rather than using a mapping table.'
I have this situation: I have a big fact table (Invoicelines) and another smaller one with further transactional data (Fees). I need to fetch some values from Fees into Invoicelines, but in 5% of the cases there is more than one corresponding row in the Fees table with the same TransactionID (I need both). There is also a RecID in every record which is unique throughout the system. Join is obviously not a solution for me, due to duplication.
So my theoretical solution: I use a few ApplyMaps to map the data I need from Fees into Invoicelines, including the unique RecID. Then I load the Fees table (from QVD) with WHERE NOT EXISTS, to load only the second records where the RecID and values have not been formerly mapped into Invoicelines. And then I ApplyMap values from these records into Invoicelines.
What is your opinion about it? Or perhaps you have a better solution?
Hi there, thanks for your comment.
Unfortunately, the 5% where there is a duplicate of the key in the Fees table means you need to load it in as a separate table. The only other option is to do a GROUP BY and a SUM when you build your mapping table, to ensure there is a one to one relationship. It’s just a question of whether you need to report on the two different fees separately or not?
Steve
Thanks for your reply. Yes I need both fees separately, and also some descriptive data from those records. That is why they exist separately, so GROUP BY will not work for me.
I realized that simple table association will work ok (no duplication), but I do not like this solution.
What’s your take on my theoretical solution?
Or solve this rather on SQL side, there are some hints about that on some forums.
Enikö
Excellent explanation. Thank you very much for providing all the possible scenarios with the ApplyMap function. Could you let me know if you have written about any other functions in a similar way, please?
Hi – thanks for the comment. I post tutorials on many topics on this blog. I also contribute to Qlik Community so you can find other solutions of mine there. If you subscribe to the blog you can be notified of new posts and tutorials.
Thanks Steve. It’s great. I just want to clarify my understanding with you. If I am wrong please correct my concept.
My understanding is that in the ApplyMap function we use a mapping table created beforehand. The mapping table must have only two fields: one is the ID and the other is the description, and the ID field should be the first field.
When we use the ApplyMap function on another table, we just call the mapping table through the ApplyMap function and use the ID field. In the second table there must be a field which contains the same values as ID; say this field is X. The ApplyMap function then matches the values between ID and X, creates a new column named as specified in the function, and puts in the value of the mapping table's Description field.
That means finally we get a Description for each X in the second table.
Is this a correct concept? Please let me know. Thank you.
That is exactly correct. Many thanks for your comment.
I have found that for flattening out a snowflaked dimension, applymap is great. Other than that I have not found a use for it. With the unique way that I create star schemas in Qlik, I can do everything without them, with high performance, and a far more extendable and easier to maintain data model.
Hi David. I have found that where there are seriously large amounts of data fewer hops does improve performance. Particularly when you have a one-for-one swap of an otherwise redundant ID for a usable dimension. Bringing more information into a single table also means that QVDs can be created for use by people who have no concept of relational tables, they just want one QVD with everything in it. This approach is not always practical or sensible though – so I am not adverse to a well planned out star schema when appropriate.
Thanks Steve. I guess since most of my data models involve several correlated fact tables, I always model in a star, and it’s very easy to show and explain to power users a “QlikView Data Mart” that they can do binary loads with. New facts and dimensions can be added with ease. I have not had a problem with tens of millions of rows, but have never modeled for above that threshold. In my opinion, QVW’s with more than tens of millions of rows should first have a summary data model “data mart” built, and then use document chaining to “show details” from another more detailed grain QVW if needed.
Here you have a little workaround to map more than one field in the same Mapping Load, using the Subfield() function.
http://community.qlik.com/docs/DOC-7609
Regards
Hi Mike. Thanks for sharing.
Hi Steve, hoping you can help with little mapping issue I have:
I’m trying to pull two fields from 2 different tables into my main table; I’ve used your guidance above successfully with one of them.
However, on the next one I have an issue with the field names (the keys), as they are not the same in each table and I’ve had to rename one of them to join the tables.
I’m trying to bring a field called “OnSite” from the Buildings table into the Equipment table, as I need this field to perform various calculations and create new fields in the Equipment table. For example, I need to calculate the cost of equipment, but this could give a different result depending on whether the equipment is in an on-site or off-site building!
In my building QVD load I have the fields :
[Building Code],
[On Site?],
in the Equipment QVD load, the relevant field here is:
[EQ Building Code] as [Building Code]
My mapping code is as follows (this is on a separate tab, likewise with data from each source QVD):
map_OnSite:
Mapping LOAD
[Building Code],
[On Site?];
LOAD
[Building Code],
[On Site?]
FROM Y:\Qlikview_Archibus\QVD\Buildings.qvd (qvd);
Back in the Equipment script i have this as the apply map statement:
ApplyMap('Map_OnSite', [Building Code]) as [EQ OnSite]
I think there’s an issue due to me having to rename the EQ Building Code field to Building Code in the script for the load of the equipment data.
Can you see anything I’m doing wrong?
Thanks
Dan
Hi Dan,
You don’t need to have the preceding load in the map_OnSite table, you can load direct from the QVD. Also, the names of the fields in the Mapping table are not relevant, only their position (first field, second field) so you may have a superfluous rename in there.
I think the problem is most likely the fact that you have a lower case ‘map’ when you create the mapping table and capitalised ‘Map’ when you use it.
Steve
Hi Steve,
Great article. I have one doubt.
In ApplyMap, what if I want to return the 2nd column from the mapping table? For example: I have a mapping table Employee (Emp_id, name) and another table with just emp_ids. I want to return employee names for the given emp_ids. I don't know how to write the code to find the employee name; the Emp_id I can fetch. Please let me know. Appreciate your help.
Hi Akanksha,
Thanks for your comment. If I follow your question you could use a where exists on the MAPPING load, something like;
Temp_Emp:
LOAD
Emp_id
FROM JustEmpIds.qvd (qvd);
Map_EmployeeName:
MAPPING LOAD
Emp_id,
name
FROM EmpCodeAndName.qvd (qvd)
WHERE EXISTS (Emp_id);
DROP TABLE Temp_Emp;
This way your mapping table will only have employee names in it where the employee id exists.
However, unless you have serious numbers of employees (hundreds of thousands plus) in the full table, and only a small percentage of that in the Emp_id only table, I can’t see that the optimisation there will save you much time. In fact, the additional steps may even hurt performance.
Hope I understood your use case correctly?
Hey Steve! Thank you so much. This solution helps.
I have 2 Excel sheets. I want to map the rows in sheet2 to sheet1, but I want the output as sheet1 columns.
Hi – you basically have three options for bringing the data from two sources together as one. ApplyMap (described above), JOIN and CONCATENATE. The best one to use depends entirely on the data. You don’t describe the data you have or the problems you are hitting. Basically, ApplyMap is for simple lookups – turning keys to values. JOIN is for bringing two tables with multiple rows together where there is a key field between them. CONCATENATE is for bringing two fact tables together where there are multiple columns that are common – but not with the same values in. You need to select the right one for things to work.
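Sketched side by side (the sheet and field names are illustrative, assuming a shared Key field):

```qlikview
// Option 1 - ApplyMap: turn a key into a value during the Sheet1 load
Map_Sheet2:
MAPPING LOAD Key, Value FROM Sheet2.xlsx (ooxml, embedded labels);

Sheet1:
LOAD Key, ApplyMap('Map_Sheet2', Key) as Value
FROM Sheet1.xlsx (ooxml, embedded labels);

// Option 2 - JOIN: bring matching columns from Sheet2 onto Sheet1 rows
// LEFT JOIN (Sheet1) LOAD Key, OtherField FROM Sheet2.xlsx (ooxml, embedded labels);

// Option 3 - CONCATENATE: stack the rows of Sheet2 underneath Sheet1
// CONCATENATE (Sheet1) LOAD * FROM Sheet2.xlsx (ooxml, embedded labels);
```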
Hi Steve, great post… One small question. I have one table with a limited set of fields but it has all the primary keys, and I want to bring in about 17 more fields into this table from a different table which has a huge amount of data. I am currently using the JOIN function. Since a Mapping Load can only have 2 fields at a time, do you think it will still make more sense to use ApplyMap, as I will have to create around 17 mapping loads?
Hi Vinny. It sounds from what you describe that keeping the tables separate and just having them associate in the data model may be the best bet. That many ApplyMap statements may cause the load script to lag. The only way to really know, though, is to benchmark different approaches.
Cool !! Thanks Steve… I will try the 2 approaches and see what works best from the performance point of view.
Excellent post. I can’t seem to use a mapped field, though, as I’d like to.
I have successfully mapped in a field from another table, but now I want to create a new field using the newly mapped-in field and an existing field. Note in the code snippet below that I’m trying to create a concatenated field called PeriodStartResMgr
LOAD
ResourceUID & '_' & TaskUID As _KeyResTaskLink,
ResourceUID,
ResourceName,
PeriodStartDate,
ApplyMap('Mapping_ResourceManager', ResourceUID, 'Unmapped; ' & ResourceUID) AS ResourceManager,
PeriodStartDate & ResourceManager as PeriodStartResMgr,
ProjectName As TSProjectName,
ProjectUID, …
I get the following error in the load:
Field not found –
SQL SELECT
ResourceUID,
PeriodStatus,
TimesheetUID,
TimesheetStatus,
ResourceName,
ProjectName,
ProjectUID,
TaskName,
TaskUID,
TimesheetLineUID,
PlannedWork,
CreatedDate,
PeriodEndDate,
PeriodStartDate,
TimesheetLineStatus,
ActualWorkBillable
From ProjectServer_Reporting.dbo.MSP_TimesheetLine_Userview
WHERE (PeriodStartDate BETWEEN '2014-06-28 00:00:00' AND Sysdatetime())
AND ProjectName <> 'Administrative'
It appears that ResourceManager field is not ‘available’ yet to be used in the line
'PeriodStartDate & ResourceManager as PeriodStartResMgr,'
Is that right?
Hi Tim. What you need to use here is a preceding load. Using one of these you can use a field derived at one level of the load (perhaps with ApplyMap) in an expression at the next level of the load. If you do a quick search you will find that I have blogged on preceding loads – and that post should make everything clear.
Thank you so much for your post.
It is very useful for getting a key by building a composite key,
but I have hit a problem in this case. Could you give me some advice?
LOAD
A,
B,
A & ':' & B as C
INLINE [
A,B
101, 121
103, 001
];
The result is:
A B C
101 121 101:121
103 001 101:121
The C value for 103 should be 103:001 as I understand it, but the result for 103 is wrong. When I change ':' to another symbol like '_' the result is right. Is there anything wrong with my code?
Thank you again
Best regards
Hi Yumi,
This is a weird one, and I can replicate it in my version of QlikView also. The issue does seem to be with the : symbol being used in the key. I have seen this once previously, with a specific data set. The solution was (as you have found) to change the character being used for the concatenation. I’ve played about with it a bit, and interestingly if you change the order of the inline rows the single key is the correct key for the other pair. Adding another row to the inline causes a correct key to be generated for that row.
If you want to find some suggestions for why this might occur I would suggest posting to Qlik Community in the first instance, and then if no staffers pick it up, pass on to Qlik support.
Please post back here if you discover anything interesting!
Hi Steve
thank you for replying to me so quickly.
I will post it on Qlik Community as you suggest,
and I hope someone can help me figure it out.
I will post back if I find anything.
Thank you again!
Best regards(o^^o)
I am using this code, but the mapping does not seem to be working, even though I use the ApplyMap function.
test1:
Load *
inline [
OCode,
S,
P,
I,
];
QCTypemap:
Mapping Load *
inline [
TypeCode, Description
S,SalesQC
P,PurchaseQC
I,InventoryQC
];
testN:
load
//ApplyMap('QCTypemap',[test1.OCode]) as NCode from test1
OCode as NCode from test1
;
//Map Code using QCTypemap ;
//Appl('QCTypemap',Code,others) ;
You can do it in just two statements, so this should work for you:
QCTypemap:
Mapping Load *
inline [
TypeCode, Description
S,SalesQC
P,PurchaseQC
I,InventoryQC
];
test1:
Load
OCode,
ApplyMap('QCTypemap', OCode, 'Missing') as NCode
inline [
OCode
S
P
I
X
];
The issue with your code is that you were still using FROM when trying to load from your test1 table, which is a resident table. This should have read:
LOAD
OCode,
ApplyMap('QCTypemap', OCode, 'Missing') as NCode
RESIDENT test1;
However, the simpler approach of doing the ApplyMap in the initial load is better, as the RESIDENT load uses more resources.
Great article!
And yes – I do forget about MAPPING statement before mapping table load :)
Hi Steve. Great work :) . ApplyMap can also be used in a where clause.
For e.g.
from Dummy.xls
where ApplyMap('Map_1', field1, 'Insert_Else_Clause') <> 'Insert_Else_Clause'
This is useful when we need to have multiple conditions in a Where clause.
Yes, like with lots of things in QlikView, the ApplyMap function is as flexible as you could possibly want. I’ve used it in the WHERE clause before and it works well. Thanks for sharing here though Goutham, as I neglected to put it in my article.
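A minimal sketch of the WHERE-clause pattern Goutham describes (the mapping values, file name, and format spec are invented for illustration): rows whose key is not in the map fall through to the else value, so comparing against that value filters unmapped rows out of the load.

```qlikview
Map_1:
MAPPING LOAD * INLINE [
Key, Value
A, Apples
B, Bananas
];

// Keep only rows whose field1 exists in Map_1: unmapped rows return the
// else value 'Insert_Else_Clause' and are discarded by the WHERE clause.
Filtered:
LOAD *
FROM Dummy.xls
(biff, embedded labels, table is Sheet1$)
WHERE ApplyMap('Map_1', field1, 'Insert_Else_Clause') <> 'Insert_Else_Clause';
```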
Hi Steve,
I am using a nested ApplyMap to look up some values. At the end of the ApplyMap I am using the default 'NA' if there is no match. However, for non-matching records it returns 0. What could be the reason for it returning 0 and not 'NA'?
Is zero the value that you are mapping on to? If so, you are missing the NA in one of the nested ApplyMap statements. Depending on how the statements are nested, the second ApplyMap may be the third parameter of the first (i.e. ApplyMap('Map_A', Field, ApplyMap('Map_B', Field, 'NA'))), but if the second ApplyMap is the second parameter of the first you will need NA twice (i.e. ApplyMap('Map_A', ApplyMap('Map_B', Field, 'NA'), 'NA')).
Hope that makes sense?
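A runnable sketch of the first nesting pattern above (map names, keys, and values are all invented): Map_B is only consulted when Map_A has no match, and 'NA' only appears when both maps miss.

```qlikview
Map_A:
MAPPING LOAD * INLINE [
Key, Value
1, Alpha
];

Map_B:
MAPPING LOAD * INLINE [
Key, Value
2, Beta
];

Data:
LOAD
    Key,
    // Fall back to Map_B when Map_A misses; 'NA' only when both miss.
    ApplyMap('Map_A', Key, ApplyMap('Map_B', Key, 'NA')) as Value
INLINE [
Key
1
2
3
];
// Key 1 -> Alpha, Key 2 -> Beta, Key 3 -> NA
```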
Hi Steve..Thanks for the nice post
I would like to know when we do Self Join in SQL as in We are joining Table 2 with Table 1 and on the basis of some condition if we have to create multiple columns based on values in one of the Table 2 column, can we utilize ApplyMap function?
In order to achieve what you get from a SQL join you will need a number of separate ApplyMap statements. Even so, there are still cases where I would go with ApplyMap.
OK, thanks. So if there is a case where we would need to self join 9 times, and the same could be achieved by using 18 separate ApplyMap statements, should I still go for ApplyMap?
For that many fields I would most likely go for loading the tables separately and have them associate in the Qlik data model.
I need to map the fuel type field between two tables, Sounding Correction and Summary, depending on the sulphur content in the Sounding Correction table. How can I map these two tables using a mapping table and ApplyMap with a conditional expression?
The mapping and condition would be based on the following logic:
If Sounding Correction.Fuel Type = 'DM*' and Sounding Correction.Sulphur Content is greater than 0.1, then Summary.Fuel Type = 'HSDO', else 'LSDO'.
If Sounding Correction.Fuel Type = 'RM*****' and Sounding Correction.Sulphur Content is greater than 0.1, then Summary.Fuel Type = 'HSFO', else 'ULSFO'.
Hi Trishta. Unless I am missing something what you are wanting is not an ApplyMap, rather a nested IF statement. You’ve pretty much written the code for it there, you just need inline IF statements, so something like this:
If(left([Sounding Correction.Fuel Type], 2) = 'DM',
    if([Sounding Correction.Sulphur Content] > 0.1, 'HSDO', 'LSDO'),
    If(left([Sounding Correction.Fuel Type], 2) = 'RM',
        if([Sounding Correction.Sulphur Content] > 0.1, 'HSFO', 'ULSFO'),
        'Other')) as [Summary.Fuel Type],
If you have more than one table in play you may need to do a join or applymap first to get the data in a place where you can do the expression.
Hi Steve,
Great blog!
A subtle issue I stumbled upon when using ApplyMap: the second parameter is actually the name of the column in the main table that will be looked up against the first column in the mapping table.
The Qlik documentation shows it enclosed in straight quotes. This works fine when there are no spaces in the column name, but when a space is present… surprise, surprise… using quotes will never match anything, as the quoted value is treated as a literal string rather than a column reference; using square brackets does match the column name.
Sample code (not matching anything)
map:
mapping
load * inline [
key,value
0,a
1,b
2,c
];
main:
load * inline [
key id,something
0,ana
2,has
4,apples
];
final:
load
[key id],something,
ApplyMap('map', 'key id', 'not found') as value
Resident main;
drop table main;
The final table will have “not found” for all rows!
If you use ApplyMap('map', [key id], 'not found') as value
tadaaa… it works
I can only guess that Qlik is not consistent about what can be enclosed in quotes versus square brackets. Some functions require column names enclosed in quotes, others in brackets.
regards,
andrei
Hi Andrei,
Thanks for your comment. In that statement it is only [key id] which is a field, and therefore the only thing that needs square brackets. The only other place square brackets are used is for table names in the load scripts. The other two parameters are strings to be passed in, hence single quotes. The third parameter will often be a string, or it can be looked up from a field – if this is the case then you would again need the square brackets. When it is a field as the third parameter it is the contents of the field, rather than the name that appears.
Hope that makes sense?
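To illustrate Steve's last point about the third parameter, here is a small sketch (the [default value] field and data are invented): when the third parameter is a field rather than a string, it is the contents of that field that are returned for unmapped rows.

```qlikview
map:
MAPPING LOAD * INLINE [
key, value
0, a
1, b
];

main:
LOAD
    [key id],
    // [key id] is a field, so it needs square brackets; the fallback is
    // another field, [default value], whose contents are used on a miss.
    ApplyMap('map', [key id], [default value]) as value
INLINE [
key id, default value
0, x
4, y
];
// key id 0 -> a (mapped); key id 4 -> y (contents of [default value])
```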