Episode 65: Data Sleuth® Standard Analyses 1

After the case plan is approved by a client, the financial investigator must then get to work processing and analyzing the financial data. Leah and Rachel begin by discussing how to process both standard and non-standard data in financial/fraud investigations for data analysis. Then, they discuss in detail the first of several Data Sleuth® analysis types: Comparative Analysis.

The information in today's podcast is just a glimpse of what's inside Leah's new book—Data Sleuth: Using Data in Forensic Accounting Engagements and Fraud Investigations—coming April 19, 2022. Preorder now on Amazon.

Rachel Organist is the Data Analytics Manager at Workman Forensics. Originally trained as a geologist, Rachel uses her unique scientific reasoning expertise and analytical aptitude to undertake financial investigations. Read her full bio on the Workman Forensics team page.

PROMOTION MENTIONED IN TODAY’S EPISODE

Register at https://podcastpromo.datasleuthbook.com/ to win your free copy of Data Sleuth: Using Data in Forensic Accounting Engagements and Fraud Investigations

RESOURCES MENTIONED IN TODAY’S EPISODE


Preorder Leah's new book Data Sleuth on Amazon—coming April 19, 2022.

CONNECT WITH WORKMAN FORENSICS


YouTube: @WorkmanForensics

Facebook: @wforensics

Twitter: @wforensics

Instagram: @wforensics

LinkedIn: @workmanforensics

Subscribe and listen to this and more episodes of The Investigation Game on Apple Podcasts, Android, or anywhere you listen.

Transcript

Intro:

This episode is part three of our four-part series leading up to the launch of my new book. The Data Sleuth process I lay out in the book is what I wish I had had when I started working in forensics in 2010. Whether you're new to the industry, wondering where to start, or maybe even wrestling with how to scale a service that seems unscalable, I believe the information in this book can help. The book is available now for pre-order. Pre-orders are what publishers use to determine how many books to order, so if you enjoy the content in today's episode, would you consider pre-ordering the book today? Stay tuned at the end of the show for more detail on the Data Sleuth book, or see the show notes to reserve your copy today.

Leah Wietholter:

Welcome to The Investigation Game Podcast. I'm your host, Leah Wietholter, CEO and founder of Workman Forensics in Tulsa, Oklahoma. Today I have with me one of the team members again. I have Rachel Organist. She's our senior data analyst. Originally trained as a geologist, Rachel obtained a bachelor of science from the University of St. Thomas in St. Paul, Minnesota, and a master of science from Penn State University. When her work in the oil and gas industry didn't provide the career satisfaction she was looking for, she researched other fields and found forensic accounting to be the perfect place to apply her analytical skills. In her work with Workman Forensics, Rachel uses her expertise in scientific reasoning as well as her aptitude for identifying, collecting, and synthesizing data to undertake financial investigations. As of 2021, Rachel is officially a Certified Fraud Examiner.

Leah Wietholter:

Well, welcome back to the podcast Rachel.

Rachel Organist:

So excited to be here.

Leah Wietholter:

Yeah, so just so our listeners know, it's like springtime in Oklahoma, and so you just never know how your allergies are going to respond.

Rachel Organist:

I don't always sound like this, as you'll know if you listened to the previous two episodes.

Leah Wietholter:

But we're gonna, like, power through this episode, episode 65, because, one, I'm excited about it, and two, it's, like, due, so we just have to make it happen.

Rachel Organist:

And probably the pollen's not going away anytime soon.

Leah Wietholter:

That is true. So today we're gonna be talking about Data Sleuth standard analyses and the different analyses that we run as part of our Data Sleuth engagements. But we're going to have to split this into two episodes, because there's just too much to talk about and you and I are both way too excited about this topic. So that we can stick to our, like, 30ish-minute format, we're going to split it into two episodes. So today we're going to talk about data processing and comparative analysis for sure, and then we'll kind of see where we're at, but likely we'll save the other ones for episode 66. So are you ready?

Rachel Organist:

I'm ready.

Leah Wietholter:

First I'd like to talk about standard and non-standard data in fraud investigations and forensic accounting engagements. So first, what is the difference between standard and non-standard data in forensic accounting engagements and fraud investigations, in the way that we use it as part of this Data Sleuth process?

Rachel Organist:

Right. So first of all, when you're talking about standard versus non-standard data, generally you're talking specifically about quantitative data. Both of those terms you're using to describe sources of quantitative data, which I just want to throw out there, because I think in the book you also talk about some qualitative data that we use. People who are familiar with kind of the broader world of data analysis might be thinking that we're talking about structured versus unstructured data, which, you know, kind of sounds like some parallel terms, but it's a little bit different than what we're talking about. So just to kind of get people on the same page here: when we're talking about standard and non-standard data, both are quantitative data sources that we're going to use in our data analysis. But the standard data sets that we use generally have the same data contained in that data source each time, each case, each, you know, instance or example of that type of data. Another way to think about it is that it's probably going to have the same fields in the data set. Then also, typically, those data sources are going to be, I was trying to think of the best way to put this, objectively or automatically prepared, like bank statements, credit card statements, payroll reports. The values that are in those data sources are just kind of created as part of an automatic process, if that makes sense. In that way, they're very reliable. Whereas with non-standard data sources, not only could the fields included be a lot more variable depending on the case or the data source, but sometimes the preparation or the data entry might be subjective, or it might be more prone to human error, data entry error, that kind of thing. Non-standard data sources might consist of paper records that we have to scan and digitize. So things like accounting records.

We use the GL detail a lot specifically. Invoices, exports from a purchasing system like POs, time sheets, exports from a point-of-sale system, inventory records, employee and HR records, paper receipts. We were just talking recently about a case with paper receipts. Paper calendar entries, I didn't work on that one, but I can think of a case where we used that. Audit or user logs, and we talk a lot about the audit trail in QuickBooks, but all kinds of, you know, point-of-sale systems and other things too would have similar user logs. Obviously that category just includes a ton of different things, and they're very diverse depending on the case, but it's pretty much anything outside of our really common standard data sources, the bank statements, credit card statements, and payroll reports, which are really the big three.

Leah Wietholter:

Yeah, the other day I was in a meeting with a nonprofit that I'm on the board of, and they were talking about a new program, and they wanted to have the people who were implementing the program, like, keep a log of how many people they served, right? Which is such a great idea. But as I'm sitting there, I was like, can I weigh in on how I think that should be done? Because they were just talking about, like, writing it on a piece of paper and scanning it in, and I was like, you realize that everything these service providers write down, someone has to digitize to actually create any value at all. So I was like, could we do maybe a Google form or something that would at least put it into an Excel spreadsheet, but still not be completely, like, difficult on the person filling it out? So anyway, I was just thinking immediately, because we've been talking about this and I was prepping for this podcast, I was like, ah, more non-standard data sources, and even more so, it's on paper. Like, we can't analyze anything that's on paper without somehow digitizing it to get a complete understanding of what we're working with. So that kind of leads to the next question: how does a financial investigator harness the information, first, in standard data sources? So how do we take standard data sources, like you mentioned, bank statements, credit card statements, or payroll reports are the three that I talk about in the book, where you can generally expect this type of information. And I like what you said about it being objective information. It's not subjective, like one month the bank's going to put some checks on there and some checks aren't there. It's just an automatic process and a reporting of that automatic process. So I like that a lot. But how do we as financial investigators actually make standard data sources usable in a financial investigation?

Rachel Organist:

First of all, something that I was thinking about in preparing for this podcast is that, man, we could talk about this for hours, and especially we could get Donna on here and talk about this for hours. So we have a data processing specialist named Donna, and she is incredible. She's my favorite person to work with. I honestly don't know if she'll ever listen to these episodes, so I'm not just saying this.

Leah Wietholter:

Right? But she also would never be on this episode, so we have to talk her up, because she is amazing.

Rachel Organist:

I was thinking too that a lot of people think that this data processing and cleaning step is, like, the boring required part before you get to the good, fun stuff like the data analysis. But seriously, I get so much satisfaction from kind of working through these processes, especially with Donna, and optimizing our data processing and cleaning. There's just so much more to this than I think people realize, and it's really not boring. So, that said, the other thing, and I think this is what you and I were just mentioning the other day, is that when we start talking to people about how we handle this part of the process, this is one of the biggest things where people are like, okay, wait just a minute, what did you say you use? Let me write that down. I feel like this is a pain point for so many people that anything we've learned along the way, people are always...

Really excited to hear about. So if processing of standard data sources is needed to get them from, well, hopefully they're not on paper. Most of the time with our standard data sources we can at minimum get a PDF. But if there's that extra step of processing that's needed to get them in kind of a tabular format, like a CSV or an Excel spreadsheet, often we can automate that, because the data that's contained in these sources is so standard from case to case and from, you know, bank to bank or whatever. So we use something called MoneyThumb to process our bank and credit card statements, and Donna is the MoneyThumb guru, but that's a really good tool for taking those standard types of PDFs and turning them into a spreadsheet. For things that are a little bit different from just a standard bank or credit card statement, but still kind of within that standard category, I don't remember if we've talked yet in the last couple episodes about IDEA, but we will probably talk a lot about IDEA in this episode and the following one, because that's kind of our workhorse data analytics software. And you may be familiar with it if you are familiar with the audit side of things; I know a lot of auditors also use IDEA for data analysis. But IDEA has some data import tools, and they have a PDF import tool, so that's something that we'll use really commonly for payroll reports if we get those in a PDF format. And then kind of a third way that sounds almost too simple to be true, but a lot of times Donna has found that we can just copy and paste from PDFs if they're OCR'd, and sometimes that works better than the IDEA import. It just kind of depends on the layout of the PDF. PDFs, and I could go down a rabbit hole about this, are meant to be human readable and not computer readable. So we've done a lot to try to, you know, write scripts and things to process PDFs.

But it's hard. I mean, I'm not a software developer, and I'll be the first to admit that, though I have written a lot of things that kind of make our lives easier around here. But man, there's a lot that goes into making PDFs computer readable. So kudos to MoneyThumb and to IDEA for their tools, because those are really useful, but that can often be kind of a headache. I guess those are the things that we've found work really well. The other thing that I want to talk about in terms of harnessing standard data sources, and this is another little point that seems too easy to be true, but it's really helped us, is with payroll reports: if you can, just, like, send the payroll provider a screenshot of a mockup of what you're talking about. I just find that there's often this communication barrier where we'll be asking for a certain kind of report, like, okay, we need this information, and, this happened to us a lot before we kind of figured out this trick, they would just keep sending us these PDFs. People didn't understand that a PDF was not very usable from a computer-readable standpoint or a data analytics standpoint. So if you just kind of draft up a little Excel spreadsheet with some columns that are, like, what you're looking for and send them a picture of that, then I feel like people are like, oh, that's what you're looking for. Yeah, we can pull something like that. And then they get you what you need, and it's already in a tabular format, and you just don't even have to mess with the PDFs at all. So that's kind of our little trick for getting the data you need. But sometimes that's still not possible, and in that case, it's nice to have those tools that can convert PDFs. I guess the last note too, and seriously cut me off at any point, Leah, because I know I'm rambling, is that even though these data sources are, you know, standard, and we like them because they're really easy to use,

there are components that just can't be automated, and check payees are kind of the big one that we deal with regularly, even if they're, you know, typewritten. A lot of our check images in the types of cases that we work are going to have handwritten checks. So we schedule those manually from the check images, and similarly, deposit items are kind of the same thing. We don't look at those for every case, but if we do need deposit items, those are going to have to be manually entered. But that kind of manual data processing is still fairly efficient for us compared to other kinds of manual data entry, just because we've done it so many times. I think we've really optimized our processes around that, and again, like, big thanks to Donna, because she's done a lot of work on that.
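As an aside from the editor: the copy-and-paste route described above, pulling text out of an OCR'd PDF and reshaping it into rows, can be sketched in a few lines of Python. This is a minimal illustration, not any tool the team actually uses; the statement layout and the pattern are hypothetical, since real statement formats vary widely from bank to bank.

```python
import re

# Hypothetical text copied out of an OCR'd bank-statement PDF.
PASTED = """\
01/03 CHECK 1041 250.00
01/05 DEPOSIT 1,200.00
01/09 DEBIT CARD PURCHASE GROCERY MART 43.17
"""

# date, free-text description, then an amount like 1,200.00 at line end.
LINE = re.compile(r"^(\d{2}/\d{2})\s+(.*?)\s+([\d,]+\.\d{2})$")

def parse_statement(text):
    """Return one dict per transaction line that matches the pattern."""
    rows = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            date, desc, amount = m.groups()
            rows.append({"date": date,
                         "description": desc,
                         "amount": float(amount.replace(",", ""))})
    return rows

rows = parse_statement(PASTED)
```

From here the rows can be written out as a CSV or an Excel sheet, which is the tabular endpoint Rachel describes.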

Leah Wietholter:

Just a couple of things that you mentioned I wanted to comment on. First, about this data processing step being the boring step: I know I've shared on the podcast before that when I worked for the FBI, I entered data, like, by hand. There was no MoneyThumb or ScanWriter or BankScan or IDEA or any of these tools back then, so I was the one who hand-entered all this information, and I did find it really boring. I was telling some university students last week, like, it was boring for the first couple of weeks. I thought, how am I ever going to do this for two years, because that's how long I could work for them. I thought, how in the world am I ever going to do this without falling asleep every day, because it's so boring? But what I started doing was realizing that actually I had the best information first. And so if I understood the case, and if I understood where the case was going, what the allegations were, then I could actually identify some of those items before anyone else saw the data. And that created a lot of opportunities, and I won't, like, spoil it, but I do talk about that in the book, and just that experience and what I found being that first person to look at that data. Then the second thing I like to make sure that people understand, and of course I talk about this in the book too, is what we mean by deposit items. Because whenever we're dealing with a bank, they are also on a debit-credit system of record keeping and accounting. It's kind of backwards, which I won't get into here, but basically, every time there's a transaction, there's a debit side and a credit side. So what we see on the bank statements, especially with, like, the deposit slips or the check images, we're just seeing one side of that transaction. So what we need is the corresponding, not really with checks, but with deposits, we need the other side of that deposit. So we either need to be able to see the cash-in ticket, like there will...

There will actually be a ticket for the cash that was deposited, or we're going to want to see the checks that were deposited with that deposit slip. So even though we just see the deposit slip on one side, that doesn't give us the detail. And we don't request deposit items on every case. It depends on what we're looking for, because, one, the bank charges for that research, and two, it is more manual. It's a lot more manual data entry than even check payees, so we want to be strategic about how we use that information, but sometimes we do need it. So that's what we mean by deposit items. And then on the check side of things, or withdrawal side of things, we also have that same situation with withdrawal slips. So checking withdrawal slips: there's one side that says, you know, this is how much I'm going to withdraw, and the bank account reference, and then the person who is withdrawing that money. So that signature and all of that can be really valuable, but there's a couple of different items that actually correspond with the withdrawal slip we can see on the statement, which is a cash-out ticket, or, if they withdrew cash, the actual cash or cashier's checks. Those are the two most common. So when we're saying deposit items, or getting the item associated with a withdrawal slip, that's what we're looking for. So those are a couple things that you'll want to consider when you're doing this data processing. So, all right, Rachel, now that we've talked about processing these standard data sources, can you tell us: how does a financial investigator harness the information in a non-standard data source?

Rachel Organist:

Yeah, so sometimes, depending on the non-standard data source, some of the same kinds of strategies that we use for standard data sources can actually apply, depending on, you know, just what it is, how tidy it is. You know, best case scenario, your data is already in kind of a digital tabular format, but maybe it just has some non-standard fields. Then I'd say probably the most important thing, really the only thing, you're going to have to do is just make sure that you understand what each field contains and how it's generated. Sometimes your client doesn't even know this, and so then, you know, there's only so much you can do. But asking your client for a simplified version of what's called a data dictionary, kind of a list of all the fields with that information of what the data is and where it comes from, can be just really helpful, even if you kind of create one for them. Again, we're always trying to make things easier for the people that we're requesting data from, because that, you know, usually increases the speed at which we're going to get it and the likelihood that we will actually get what we're asking for. But, you know, even if you just send your client a list of all the fields in an Excel spreadsheet or a Google Sheet and say, hey, can you just fill this out and tell me what these are, I always regret it if we don't do that upfront, because you're just going to end up struggling later. I also really recommend dropping fields that you don't need. You know, always save the original version that includes all the fields, but a lot of times you'll find, and we even have an IDEA macro for this now, that there are fields that aren't even used, because sometimes in big data sets there'll just be a bunch that are blank. Just exclude ones that you know you don't really care about. So that's kind of one of our best practices around that.

Other than that, for sources that need to be digitized or scheduled, and I want to talk a little bit in a minute too about how we always use the word schedule, a lot of times the part that takes the most thought is just setting up the initial template of what you want your final data to look like. So kind of having that Excel spreadsheet in mind: what's your endpoint, and then how are you going to map the fields from your original data source, whether it's handwritten or scanned or whatever, into your tabular formatted template? Putting a little bit of thought into that upfront can often really make things easier in the long run. And thinking about what you're going to want to do with the data, how you might want to be able to sort or filter or group by or extract from it, you know, in whatever your next steps are, that'll help you decide how to arrange it. And just kind of being an Excel power user, honestly, becoming Excel savvy with those different formulas and stuff, can really help in speeding up that kind of data entry too. I know that Donna and I have taught each other several tricks along the way for speeding up populating different fields, or things that are repeated, or that kind of thing. So I wanted to talk a little bit, kind of along those lines, and this applies to both standard and non-standard data scheduling, about why we use the word schedule all the time, and then just some of our best practices around, like, QCing data and data cleanup, and I know you'll have comments on this too, Leah. The word schedule was new to me when I kind of came into this world of forensic accounting and fraud investigation, because my background's not in accounting. But, you know, accountants love schedules, basically tables of financial information, and so we just kind of use that term too, and it

means what it means. It's just tables of financial information. But basically, you can start with any variety of data sources, you know, sometimes they look messy, or, like we've talked about, they're handwritten or whatever, but getting those into a schedule is really kind of how you want to think about all this data processing, and that's usually what we're doing, whether we're scheduling accounts or payroll data or whatnot. And so the schedule is kind of at the heart of everything we do around data processing. And then the other thing with those schedules, and this is important with both standard and non-standard data sources, is coming up with a system for QCing your data entry using check figures. That is just really important, because if you don't have your data translated accurately to a digital format, then any analysis you do on that digitized data is going to be garbage. Or, I mean, that may be dramatic: it's not going to be reliable. And everything that we do has to be reliable, and it has to stand up in court. So making sure that you have a robust plan for QCing your data and always using those check totals is, I think, really important.
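An editor's sketch of two of the practices described above: dropping fields that are blank in every row (while keeping the original data set), and tying a schedule out to a check figure before relying on it. The field names, amounts, and check figure here are all made up for illustration.

```python
# A hypothetical schedule as a list of dicts, e.g. read from a CSV export.
rows = [
    {"date": "01/03", "payee": "Vendor A", "amount": 250.00, "memo": ""},
    {"date": "01/05", "payee": "Vendor B", "amount": 1200.00, "memo": ""},
    {"date": "01/09", "payee": "Vendor C", "amount": 43.17, "memo": ""},
]

# Find fields that are blank in every row, then build a working copy
# without them. The original `rows` list is kept intact.
blank = {f for f in rows[0] if all(not r[f] for r in rows)}
schedule = [{f: v for f, v in r.items() if f not in blank} for r in rows]

# QC step: tie the scheduled total out to a check figure taken from the
# source document (e.g., the statement's total of withdrawals).
CHECK_FIGURE = 1493.17
total = sum(r["amount"] for r in schedule)
assert abs(total - CHECK_FIGURE) < 0.01, "schedule does not tie out"
```

If the assertion fails, the data entry (or the check figure) needs to be revisited before any analysis is run on the schedule.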

Leah Wietholter:

Yeah, actually my only comment was going to be to make sure you talked about check figures, and making sure to put those in your data schedule, your account schedule, so that if you are working as a team, your team knows that you've checked this and compared it to the original data source, so that it ties out. So when preparing a data set or a schedule of either standard or non-standard data sources, do you process only key transactions or all transactions, and how do you make that decision?

Rachel Organist:

This is such a great topic, because the answer is, it depends. We have a primary best answer to this question, and it's kind of the opposite of what people sometimes think, but then there's some nuance to it as well. So a lot of times people intuitively only want to send us a subset of key transactions, whether that's payroll that's only associated with the person that they suspect of committing fraud, or purchase orders that they already think are bad because they came from a specific department. And I think part of it is that a lot of people, especially when they're feeling really upset and personally wronged by whatever the fraud was, have already kind of done some of their own level of investigation. And so they think, hey, I gave you guys a head start, like, can I just send you what I found, or these things that I think are bad or highest risk? But there are kind of two problems with this. One is that we can't show or effectively demonstrate what's bad unless we compare it to what's good or normal. And then there's also the risk that you are going to miss additional fraudulent transactions that, you know, maybe used a different scheme, or the scheme was broader than you originally thought. And I think we talked a little bit about this in case planning, and I know you talked about it in the book, but something we talk about a lot in our trainings around case planning is that if you go in with too narrow of a mindset, thinking you know what scheme is being perpetrated or where exactly the fraud is, well, it's good to have that idea of what's highest risk, but if you only look at that to the exclusion of everything else, then there's the possibility that you'll miss some portions of the loss. So I was thinking about a case that we actually talked about, I think it was in the last episode, where our client was the business owner and his employee had embezzled.

She had confessed to him about the fraud, and he had had her, like, go through the bank statements and specifically flag things that she admitted were fraudulent. But, you know, we talked to him and said, I know that you think you already have this list of what's bad, but just let us look at all of the bank statements, all of the transactions. And lo and behold, we did find additional transactions that, you know, kind of hit various red flags, or various tests for being high risk, and they did turn out to be part of the loss and things that didn't benefit his business. So that's the other big reason that you really want to look at all the transactions if at all possible. That being said, the nuance is that sometimes, you know, there are real budgetary limitations, technical feasibility limitations. Sometimes it's just not reasonable to process all the data, and that can be a really hard decision to make. A really common place where we make that decision is, like we just talked about, scheduling check payees, which is a very manual process. Sometimes we'll set a dollar amount threshold where we're only going to schedule payees for checks that are over a certain dollar amount, and that just depends on the case and what we're looking for, but it can be a way to keep your data processing costs down. If we're looking at payroll data, again, sometimes that can be kind of tricky to process. So if it is something that's going to be heavily manual, we might only want to process it for key employees that we think are highest risk or most likely to be involved in the fraud, versus for all employees. So there are obviously a lot of factors that go into those kinds of decisions: you know, what form is the data in, what level of processing is needed, is it very time-intensive and manual, are they paper or digital documents, are they scanned PDFs versus able to be OCR'd? How bad is the handwriting, honestly, is a huge factor in how long something's going to take. What's the client's budget?

What's the level of risk in the transactions that you're going to decide not to digitize? And, this is a big one too, what's your ability to use check figures for quality control if you don't schedule everything? That one often makes the decision really tough, because you think, if I don't schedule all of this, I don't have a great way to tie it out. So there's that risk that you might miss some data entry errors. So in my opinion, processing all transactions is always ideal, and I think that's where the Data Sleuth process really adds a lot of value, but sadly, sometimes it is not feasible.
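An editor's sketch of the dollar-threshold trade-off described above. Quantifying what falls below the cutoff makes the decision, and the untested risk it leaves behind, explicit. The check amounts and threshold are made up for illustration.

```python
# Hypothetical check amounts pulled from a scheduled bank account.
checks = [120.00, 45.50, 2500.00, 980.00, 15.00, 4300.00]
THRESHOLD = 500.00  # only schedule payees for checks at or above this

to_schedule = [c for c in checks if c >= THRESHOLD]
below = [c for c in checks if c < THRESHOLD]

# Quantify what the threshold leaves untested, so the trade-off is
# documented rather than implicit.
untested_risk = sum(below)
coverage = sum(to_schedule) / sum(checks)  # share of dollars covered
```

Reporting `untested_risk` and `coverage` alongside the threshold lets the client see exactly what the cost-saving decision excludes.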

Leah Wietholter:

Yes, it's true. Um, well on that note, let's just take a quick break and we'll be right back.

Ad:

Hi, everyone. It's Leah. My new book, Data Sleuth: Using Data in Forensic Accounting Engagements and Fraud Investigations, launches April 19th. To celebrate, we're giving away 10 signed copies during each of our April 5th and April 19th episodes. With 20 chances to win, you do not want to miss out. To be sure you're in the drawing, subscribe to the podcast and turn on alerts to be the first to know when the episodes drop.

Leah Wietholter:

Welcome back to my discussion with Rachel about our Data Sleuth analyses. And the first analysis, like, after data processing, that we're going to talk about is comparative analysis. So, Rachel, I'm just going to kind of let you loose on this topic. What is comparative analysis in a financial investigation?

Rachel Organist:

I love it so much — this is one of my favorite things to talk about. When we talk about a comparative analysis, it's really not one specific type of analysis; it's more of a category of analysis. And there are really two frameworks for comparative analysis that we use. Sometimes we're comparing what happened versus what should have happened — if you hang out in our office or on our Google Meet calls, you will hear us say this all the time. The other framework for putting together a comparative analysis is comparing best-evidence data sources that were out of the subject's control versus data sources that were controlled by the subject. This is a podcast, so you can't see our Venn diagram, but we love to use Venn diagrams when we do trainings on this kind of thing. What you're looking for are the differences between the data sets — sometimes you're looking for things that are in both data sets, but more commonly, when you're looking for the loss, you're looking for things that exist in one data set but not in the other.
Sometimes those two frameworks, or those two types of analysis, are actually going to be the same thing. A really common example: if you want to look at the bank statements versus some kind of sales record data set, that could be both what happened versus what should have happened and best evidence versus subject-controlled data. The bank statements are going to tell you what happened — what was actually deposited to the business's account — whereas the sales records are going to say what should have happened: what should have been deposited to the business's account, based on what sales were made. And conversely, the bank statements show you what the subject didn't control — those are always our best evidence — while the subject-controlled data is the sales records. The subject could have made changes to that data source that aren't reflected in the bank statements, but the bank statements are still going to be best evidence. It just depends on the particular case which of those frameworks is more helpful; sometimes you want to use both.
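The deposits-versus-sales comparison Rachel describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration — the dates and dollar amounts are invented, and real engagements would match on more fields than a date — but it shows the core idea: for each day of recorded sales ("what should have happened"), check what actually reached the bank ("what happened") and flag the shortfall.

```python
# Hypothetical example: compare daily sales totals ("what should have
# happened") against daily bank deposits ("what happened").
# All figures are invented for illustration.

sales_by_day = {            # subject-controlled sales records
    "2021-03-01": 1200.00,
    "2021-03-02": 950.00,
    "2021-03-03": 1430.00,
}
deposits_by_day = {         # best evidence: bank statement deposits
    "2021-03-01": 1200.00,
    "2021-03-02": 700.00,   # $250 short of recorded sales
    # no deposit at all on 2021-03-03
}

def compare(expected, actual):
    """Return per-day shortfalls: expected amount minus what arrived."""
    diffs = {}
    for day, amount in expected.items():
        shortfall = amount - actual.get(day, 0.0)
        if shortfall > 0:
            diffs[day] = shortfall
    return diffs

shortfalls = compare(sales_by_day, deposits_by_day)
total_shortfall = sum(shortfalls.values())
```

In practice this same shape — iterate one data set, look up each record in the other, keep the differences — covers most "exists in one set but not the other" comparisons.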

Leah Wietholter:

Yeah. And sometimes — especially when working in a team — it's about really clearly identifying which data source we're saying represents what happened and which data source we're saying represents what should have happened. I ran into this not long ago, working with somebody who said, "Well, I would say this one supports what should have happened, and this one supports what happened." It doesn't really matter, as long as we're comparing data sources where one verifies the other or shows us the differences. What we don't want is to compare a data source against itself. I talk about that in the book. I remember, as part of some audit steps when I was doing more traditional auditing, I was comparing a system report to the check stubs — and that same system generated those check stubs. So I was comparing a data source to itself, which isn't going to tell me anything. I remember telling the manager, "If this is the step, I'll do it, but it's not going to create any value here. What would be better is if I took the system report, or even the check stubs, and compared them to what cleared the bank account." That would be a more valuable, more meaningful comparative analysis.

Rachel Organist:

Right, yeah. And on the other end of the spectrum — that made me think — you also want to make sure you're comparing two things that theoretically should match if there were no fraud, or at least that you understand how they relate. A lot of times I find myself asking the client, "If there's a discrepancy between these two data sets, what does that mean? Is there a reason these two data sets would ever not match?" Because sometimes there are operational reasons, or quirks in how they use the data, that make them not match. You want to make sure, A, that you're actually comparing two different things, and B, that you understand where you do expect them to match, so that you can accurately interpret the meaning of any discrepancies.

Leah Wietholter:

Yeah. And we've talked lately about another use of comparative analysis, where we're not taking two existing data sets — one that might verify the other, or that should match — but where we have to build a data set based on best evidence. Do you want to talk about that a little bit?

Rachel Organist:

Yeah, absolutely. A few cases came to mind right away where we've done this, and I'm sure there are more. Sometimes the "what should have happened" side is not something that exists; we have to calculate it based on a contract or some other documentation — some set of rules the client provides us with. The first one I thought of was the class-action wage dispute we did a couple of years ago. To give a very simple description of that case: the plaintiffs should have been given one paid fifteen-minute rest break per four hours worked. So we had to take the time data, look at the shifts they worked, and calculate how many rest breaks should have been taken; then we could compare that to the rest breaks that were actually taken. In that case, there was no data set that showed the rest breaks they should have taken — we had to calculate it from other data. Another one was a mortgage servicer, where we were trying to identify whether funds had been diverted or removed from escrow accounts. We wanted to calculate what the running balance in the escrow accounts should have been, and to do that we used the escrow documentation — if you have a mortgage, you get those letters from your servicer that say, "Here's what you're going to put into escrow this month." On the flip side, we could look at what tax and insurance payments should actually have been made by the servicer. Putting those two together, we could calculate what the balance in the escrow account should have been and compare it to the balance that was actually there. Then, when we were talking about those two cases, Leah pointed out a third one — an estate case,
where the issue was how various expenses of the estate should have been apportioned to the different heirs, based on their proportion of the overall bequest. Say one heir got 20% of the estate, another heir got 20%, and a third heir got 60% — we had to calculate how the expenses should have been apportioned and then compare that to how the expenses were actually paid out. So this is actually more common than we realized when we first started talking about it. You have to get a little creative: sometimes the "what should have happened" side doesn't already exist, and you have to put it together.
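The rest-break case is a good example of calculating the "what should have happened" side from a rule rather than a data set. Here is a minimal sketch, assuming the rule means one break per full four-hour block worked (one plausible reading; the actual engagement would follow the contract's exact language). The shift hours and break counts are invented.

```python
# Hypothetical sketch of the rest-break calculation described above:
# one paid 15-minute break per 4 hours worked. All data is invented.

shifts_worked_hours   = [8.0, 4.0, 10.0, 3.5]  # one employee's shifts
breaks_actually_taken = [1,   1,   1,    0]    # from the employer's records

def breaks_owed(hours):
    # Assumption: one break per completed 4-hour block of work.
    return int(hours // 4)

owed = [breaks_owed(h) for h in shifts_worked_hours]          # calculated baseline
missed = sum(max(o - t, 0) for o, t in zip(owed, breaks_actually_taken))
unpaid_minutes = missed * 15                                  # basis for the damages figure
```

The escrow and estate examples follow the same pattern: the expected side (running escrow balance, per-heir expense share) is computed from documentation, then compared row by row to what actually occurred.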

Leah Wietholter:

Yeah, sometimes it's as easy as taking what happened in QuickBooks, comparing it to what actually happened in the bank statements, and finding the differences. Really, at the end of the day, in its simplest state that's a bank reconciliation. Sometimes we do that with different data sets, but sometimes — I think the best way to explain it is — there is no data set that can be exported or provided in some type of report for us to process, like we talked about, and so we have to build it on best evidence. I think the first time I realized this was something I was doing was on a pretty old case now — maybe one of the first ones you and the team worked when you first got here. We had a guy in a divorce matter who needed to know how much cash, or revenue, his company had collected, and all they had were paper receipts — you mentioned that in our data processing segment, and it reminded me of this. Even the receipts weren't complete, so we had to ask: what's our most reliable number for sales? What's our most reliable number for expenses? How much was paid out to the contractors? And get down to, okay, this is how much should have made it to the bank account at the end of this period. It's not going to be perfect, but it's still a comparative analysis — we're still creating that baseline, or foundation, to say this is what should have happened, then comparing it to what actually happened, and then we'll have our loss. Or we may just have the answer — there may not even be a loss for him.
In that case it was really, "This is how much you should have received over this period," and we didn't even do the comparison part. But that's where understanding how a business operates — how it collects revenue and pays expenses — is really important to comparative analysis: understanding what those data sources represent, like we talked about in data processing, and what those fields actually mean, because we may have to create calculations that provide that baseline for us.
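The baseline-from-receipts approach Leah describes reduces to simple arithmetic once the most reliable totals are chosen. A hypothetical sketch, with all figures invented: the expected amount reaching the bank is gross sales minus whatever was paid out in cash before deposit, and the gap against actual deposits is what warrants explanation.

```python
# Hypothetical sketch of building a "what should have happened" baseline
# from incomplete paper records, as described above. Numbers are invented.

gross_sales        = 48_000.00  # best available sales total from receipts
contractor_payouts = 18_500.00  # paid out in cash to contractors
cash_expenses      = 4_200.00   # other expenses paid before deposit

# Baseline: what should have reached the bank account over the period.
expected_to_bank = gross_sales - contractor_payouts - cash_expenses

actual_deposits  = 19_800.00    # total deposits per the bank statements
unexplained_gap  = expected_to_bank - actual_deposits
```

The hard part of the real engagement is not this arithmetic but deciding which source gives the "most reliable number" for each line — which is why understanding how the business collects and spends money comes first.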

Rachel Organist:

Yeah. And the other thing I was thinking: those kinds of cases are often the most fun, just because of the creative problem-solving that's required. But the comparative analysis framework can also mean something as simple as taking a list of disbursements. I'm thinking of another case with trust investment accounts. Our client was — not a trustee exactly, but essentially an investment company that held funds for different trusts — and there were disbursements made from a customer's account that shouldn't have been made. The client had a list of accounts that did belong to the customer — accounts that would have been appropriate beneficiaries for the disbursements — and anything not on that list was probably part of the loss, or at least raised the question, okay, where did it go? That's a really easy one: you take a list of transactions with their destination or beneficiary account numbers, compare it to the list of good account numbers, and anything not on that list is part of the loss. Really simple. Even what we'll talk about next time — our interesting data findings and source-and-use analyses, the ones that involve client feedback — in a way, they're all comparative analyses, because you're comparing what actually went out of the client's bank account to what should have gone out: what types of expenses were appropriate to be paid from this account, or did benefit the business, that kind of thing. So — not everything, but once you start thinking about it, a huge proportion of what we do fits within that what-happened-versus-what-should-have-happened framework.
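The approved-beneficiary comparison Rachel calls "a really easy one" is a plain set-membership filter. A minimal sketch with invented account numbers and amounts:

```python
# Hypothetical sketch of the approved-beneficiary comparison described
# above. Account numbers and dollar amounts are invented.

approved_accounts = {"111-200", "111-201", "111-202"}  # belong to the customer

disbursements = [
    {"account": "111-200", "amount": 5_000.00},
    {"account": "999-850", "amount": 12_000.00},  # not on the approved list
    {"account": "111-202", "amount": 2_500.00},
    {"account": "999-851", "amount": 7_500.00},   # not on the approved list
]

# Anything sent to an account outside the approved list is flagged.
suspect = [d for d in disbursements if d["account"] not in approved_accounts]
potential_loss = sum(d["amount"] for d in suspect)
```

Using a set for the approved list keeps each lookup constant-time, which matters when the disbursement list runs to tens of thousands of rows.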

Leah Wietholter:

Yeah, so true. Well, Rachel, thank you so much for sharing your tips and tricks — and just how much you love data analysis, data processing, and anything related to data. It's been great talking with you, and we'll talk about more next time.

Rachel Organist:

Looking forward to it.

Outro:

Thank you for listening to The Investigation Game. For more information on any of the topics brought up on this show, visit workmanforensics.com. If you enjoyed our show, be sure to subscribe and leave a review. You can also connect with us on any social media platform by searching "Workman Forensics." If you have any questions or topic ideas, please email us at podcast@workmanforensics.com. Thank you.
