I was working with someone from my local user group recently. He was dealing with a slow data import process. The Use Case called for any user to be able to import data from an Excel file into the custom app and then work with that data. And it was slow. We took a look at it and talked through it together. A few hours later, the import was smooth and quick, and the data was available to all users shortly after the import finished. Let's take a look at best practices for a FileMaker data import.

In this post, we’ll talk about the target table: the table that will receive the data from the FileMaker Data Import. In a future post, we’ll talk about the best practices of the actual import process: how to script it so any user can perform an import.

The Target Table

The first question we should ask: into which table will the data be imported?  It seems obvious: the table which holds the data. If I’m adding new students to a student information system, it seems I should directly import them into the Students table.

But that’s not always the case. There are a few reasons why we want to, at first, steer clear of the table that holds the data:

  • A table with many calculated fields will slow the import to a crawl. Even though the calc fields are not shown, the import forces the calcs to run.
  • The data might be wrong. It is possible to forget some columns or rearrange the columns during an import.
  • The data might need additional processing. Calculated fields shouldn’t process the data, but sometimes processing needs to happen.

So instead of importing into the actual data table (students into the Students table), FileMaker developers (and developers on other systems) direct the data import into a temporary table.

A Temporary Table

The temporary table idea is worth exploring, and we’ll do a deep dive on it here.

A temp table has the following characteristics:

  • It is a table in your custom app or in another file.
  • The fields in this table are static fields: number, date, text fields. There are no calculated fields.
  • The fields in this table match the columns in the import source.
  • The scripting directs the import source to import the data here.

Since tables and fields are cheap, you can have one table per use case. If you have to import both students and test scores, you can have one temp table for each without issue.

The Advantages

Temporary tables simply accept the data and thus have great advantages.

Quick Import

Importing into a table with no calculated fields is quick. The data simply gets added to the table.

Post-Import Validation

Once the data is imported into a temporary table, it can be validated before it gets placed into the real table. And this is a good reason to use a temp table. We don’t want data to be imported that is incorrect.

User Error

A school administrator needs to import test scores into his student information system. These scores, of course, belong to his current students and are keyed to the students' district IDs. Somehow the import gets messed up. The administrator accidentally disturbs the order of the import: instead of the “StudentID” column being imported into the matching field, this column gets imported into the scores field. A validation process could go through the temp table's data to ensure that every studentID in that field matches an existing student. If not, the record shouldn't be placed into the “Scores” table.

Data Error

The data might be wrong. There might be missing fields or missing data, or there might be incorrect data in the columns. A whole host of things could go wrong. Since we, as the developers of the system, may not control the source of the data, we have to make sure the data is error-free.

In either case, we want to script the process to handle the validation of each column and each row. Our scripted process could then generate a report to show the importer which records were rejected and why.
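
As a rough sketch of what that validation pass could look like (the table, field, and layout names here are hypothetical, and your matching logic may differ):

Go to Layout [ "TempScores" (TempScores) ]
Go to Record/Request/Page [ First ]
Loop
  # Does any student have this district ID? (ExecuteSQL keeps us out of Find mode.)
  If [ IsEmpty ( ExecuteSQL ( "SELECT id FROM Students WHERE districtID = ?" ; "" ; "" ; TempScores::StudentID ) ) ]
    Set Field [ TempScores::ValidationStatus ; "Rejected" ]
    Set Field [ TempScores::ValidationNote ; "No student matches ID " & TempScores::StudentID ]
  Else
    Set Field [ TempScores::ValidationStatus ; "OK" ]
  End If
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop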

Post-Import Processing

Validated data could further undergo some processing as it is being moved to the correct tables. Here are some possible processing steps:

  • Duplicates are removed.
  • Phone numbers and email addresses could be added to a Contacts table.
  • Scripted calculations could be made and the result placed into a static field.
  • New records could be added.

The amount of processing that can happen is unlimited.
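
For instance, the “create the real record” step might look roughly like this, assuming we've already captured a validated temp row's values in variables (again, table and field names are made up):

Set Variable [ $studentID ; Value: TempScores::StudentID ]
Set Variable [ $score ; Value: TempScores::Score ]
# Skip duplicates: only create a Scores record if one doesn't already exist
If [ IsEmpty ( ExecuteSQL ( "SELECT id FROM Scores WHERE id_student = ?" ; "" ; "" ; $studentID ) ) ]
  Go to Layout [ "Scores" (Scores) ]
  New Record/Request
  Set Field [ Scores::id_student ; $studentID ]
  Set Field [ Scores::score ; $score ]
  Go to Layout [ original layout ]
End If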

Session-Specific

It is possible for two users to want to import data into the same table at the same time. Maybe they're importing the same data; maybe one is importing Test 1 scores and the other is importing Test 2 scores.

Either way, there's a strong case to be made for using a temporary table. Both users can import their data; the data sets don't interfere with each other and can be processed in turn.

Often, users are importing their data in a session. As part of the process, a session record is created, and the user’s import is tied to that session. Sessions are good for reporting out validation errors. My script can record that Session 32 had six rejected records.
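
A session can be as simple as one record created at the start of the import and stamped with the results at the end. A minimal sketch (the Sessions table and its field names are assumptions):

Go to Layout [ "Sessions" (Sessions) ]
New Record/Request
Set Field [ Sessions::user ; Get ( AccountName ) ]
Set Field [ Sessions::startedAt ; Get ( CurrentTimeStamp ) ]
Set Variable [ $sessionID ; Value: Sessions::id ]
# ...the import and validation run here, stamping $sessionID on each temp record and counting rejects...
Set Field [ Sessions::rejectedCount ; $rejectedCount ]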

FileMaker Data Import: But I want it now!

From a user’s perspective, a FileMaker data import, in the manner described, seems like a lot. First the data is imported into a temp table. Then it is validated to some degree. Finally, it is processed into the correct tables.

Yep. The above descriptions do take some time. Users may balk at that. But let’s consider the alternatives:

  1. The data is imported into the actual tables, where many calculated fields reside. The user sees the “Importing” dialog for many minutes. She can do no other work in FileMaker.
  2. The data is imported incorrectly into the actual tables. The test score column ends up in the studentID column.
  3. The data is imported into the actual tables, but the data source contains extra columns that need to be placed into related tables.

Patience is a (common) virtue

When I was a teacher, I also worked as the database administrator (two full-time jobs). Every year I'd have to set up our external services with student data. Those online services, from big testing and student-organization companies, would ask me to upload the file to a site and then WAIT for up to 24 hours until the data was processed. Inevitably I'd get a report back saying some records (or an entire import session) were rejected, with clear reasons why.

I think it is reasonable to ask users to wait a few minutes or hours for their imported data to show up in the custom app. I think they’d rather have the data validated and in the correct tables and fields than see wrong data instantly.

FileMaker to the rescue

Luckily we have tools available to us to validate and process the data quickly. We can use server-side schedules or Perform Script on Server (PSOS). We even have JavaScript or external microservices available to us to help process the data in some way.

So we’re not out of luck. Users get to see their correct data sometime after their import.

 

The target table is an important consideration when designing an import process. It seems to me a temporary table is the right choice for all user-driven imports.

In the next post, we’ll take a look at other user-specific needs for an import process. Stay tuned.

Knowing how to use a FileMaker join table (though the concept is not exclusive to FileMaker) can solve many data-model-related questions for us. As we fiddle with the Entity Relationship Diagram to work out the exact relationships of entities for our custom apps, we can rely on this join-table technique to properly structure seemingly complex data models. Let's take a look at how this concept is used in our work.

What does a FileMaker Join Table Solve?

The join table, as it is commonly called and known, solves the complex problem of how to structure the data model when the entities are related in a many-to-many style. A classic example of these entities includes classes and people. Many people have many classes. A join table joins these two entities together.

The Entity-Relationship Diagram model of a many-to-many relationship

The join table shows the data related correctly with no duplication of records. If I needed to set up a roster of students and I didn't use a join table, I'd have to duplicate either student or class records. For example, in the Classes table I'd have to have one record (for the same class) per enrolled student. That's a problem. Classes is an entity: one record per unique class. I shouldn't duplicate class records just to make room for all the people.

What is a Join Table?

Join tables are simple tables that hold the primary keys of the tables they join together. In the above example, the join table has a field holding the key of the Person record. It also contains a field for the key of the Classes record.

The Join table, joining Students and Classes.

This table holds one record for every unique combination of the joining entities. In this case, it holds one record for each student and for each class they are in. If Jamie is in five classes, this table holds five records with Jamie’s primary key value and each one of her classes’ primary key values.

Many records for students taking many classes.
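
As a sketch, the Enrollment table needs little more than its own primary key and the two foreign keys. Jamie's five classes would look roughly like this (the IDs are made up):

id      id_person   id_class
E001    P-JAMIE     C-ALG1
E002    P-JAMIE     C-BIO
E003    P-JAMIE     C-ENG2
E004    P-JAMIE     C-HIST
E005    P-JAMIE     C-ART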

Naming a Join Table

It’s a small but vital point: naming a join table. It’s actually really difficult to name things, at least it seems so. Here are some naming options:

  • The unoriginal but descriptive and easy-to-generate name that concatenates the two joining tables: “People_Classes”. That works. It tells you what's in the table. But “People_Classes” as a name does not describe the entity this table represents.
  • The hard-to-generate name that describes the purpose of the table, what the table is holding. Instead of “People_Classes” I’d use “Enrollment” or “Register” or something like that.

Pick a naming style. I think the second choice is better; my future self will enjoy seeing “Enrollment” rather than “People_Classes”.

Viewing Records

Once the data is set up, the next step is to create the viewing area. A FileMaker join table gives us the freedom to view its contents (and thus the correct data) from either side of the join or from the join table itself.

On a student record

From a student record, that is, from the Students table occurrence, we can see all of the classes in which she is enrolled. And we can do this through, as usual, a portal. But from what table occurrence will the portal be drawn? The description above gives the answer: “. . . in which she is enrolled.” The portal on the student layout comes from the Enrollment table. We've established that the Enrollment table holds one record per class per student, so one student's class list is found here.

However, as I described above, this enrollment table only contains foreign key fields. From where does the class name come? Simple: we add the class name field from the classes table through this relationship.

On the Classes Record

From the classes table occurrence, we can see all the students in each class. Again, a portal sourced from the Enrollment table is set up. And by adding the student name field into the portal, we see all the students in the class.

From the FileMaker Join Table

The join table is best used for a list, a report of the students in each class. Since we need the student name and the class name, we simply add these fields to the report layout.

Adding Records

So how are records added to this FileMaker join table?

Let's take a typical Use Case. The principal of a school needs to enroll students in classes. She would go to the Classes layout and, one at a time, add students to classes. (Forget the fact that this is terribly slow 🙂 ). She starts on “Algebra” and needs to add students. So where is the new record created?

If you stop and think about it, there's only one place: in the join table, that is, the Enrollment table. In this table we create a record with the current class's primary key and a student's ID.

So then the script would follow these steps:

  1. Get the ID of the class in a variable.
  2. Get the ID of the selected student, in some manner, in a variable.
  3. In the join table, Enrollment, create a new record.
  4. Set the foreign key fields with the variable data.
  5. Close the window.
  6. Refresh the current layout.
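
Here's a hedged sketch of those steps as script steps (the layout and field names are placeholders; the demo file's version may differ slightly):

# Runs from the Classes layout; the student was picked into a global field
Set Variable [ $classID ; Value: Classes::id ]
Set Variable [ $studentID ; Value: Classes::g_SelectedStudentID ]
New Window [ Style: Card ; Using layout: "Enrollment" (Enrollment) ]
New Record/Request
Set Field [ Enrollment::id_class ; $classID ]
Set Field [ Enrollment::id_person ; $studentID ]
Commit Records/Requests [ With dialog: Off ]
Close Window [ Current Window ]
Refresh Window []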

The demo file attached shows my scripting method. Look at the Classes layout. Step 2 above is the most challenging of the steps (though it isn’t challenging at all).

I’ve also, on the Student layout, used another method for adding records to the join table.

The Problem Solver’s Data Structure

The Join table solves a complex problem for us. It allows our clients to see complex relationships in their data. And it is a high-level tool that gives us more power at our fingertips.

In this video, I illustrate the above post.

Download the sample file here.

 

The Workplace Innovation Platform we know, called FileMaker, is exactly that: a platform. There are many components in the platform. As we add those components to support all the clients' needs in the custom app, we have to take special care with each. One thought to keep in mind is script step compatibility. We have to make sure each step in each script works in the parts of the platform we intend to target. So let's take a look at FileMaker script compatibility.

FileMaker Go, FileMaker Pro Advanced (on macOS and Windows), FileMaker WebDirect, FileMaker Server, and the FileMaker Data API work very similarly. But there are differences in whether or not script steps work. Some steps are not supported in a part of the platform, and some steps are partially supported. It would be mind-boggling to remember which steps work where, so the script workspace gives us a tool to use.

Many of my script steps do not work for the FileMaker Data API component.

At the top right of the script workspace is a button that opens this dropdown. We can use it to examine a current script to see which steps are not compatible with the selected component. You’ll see any incompatible script steps grayed out.

Additionally, we can use this dropdown to examine the script step list to see which steps are compatible.

So the lesson is: when we are writing a script, we should use the FileMaker script compatibility checker to make sure each step will work. We need to design scripts that are the best for each part of the platform. We need to review existing scripts to make sure each step will perform correctly in the chosen components.

Scripting smarter

Scripting compatibility sounds like a lot of work. Either I’ll have to create one script per component to do the same thing, or I’ll have to have a lot of logic inside a script to handle all the possible places a script will be run. But it really isn’t too hard. And if you think about it, what’s worse: more work or incorrectly-performed scripts?

Here are some strategies you can use to ensure you’re scripting smarter: efficiently and effectively.

Get to know the components

Many seasoned developers have an encyclopedic knowledge of the entire platform, and that includes knowing which steps are compatible with each part of the platform. This simply comes with experience. I don't think these developers know all the compatible steps for, say, FileMaker Server, but they know what FileMaker Server can and cannot do.

For example, we know two ways to run scripts with FileMaker Server: Perform Script on Server or scheduled scripts. Each of these uses the scripting engine inside Server. It opens an instance of the custom app in memory only. No UI is drawn. So we have to consider what this means. There's no window, so the Move/Resize Window step is useless. That's why it's not compatible with Server. (If you do use this step on Server, nothing bad will happen; we'll discuss the consequences further down.)

Likewise, the script step Get Directory does not work in FileMaker WebDirect. We know this because that component does not have access to a client’s file system. FileMaker developers in the game for a long time know this.

One caveat

FileMaker, Inc. continually updates the platform's functionality, and thus FileMaker script compatibility changes. Script steps that were not compatible in a part of the platform in the past are now compatible. The script step “Save Records as PDF”, back in the old days of FileMaker 15 and earlier, was not compatible on Server. Since FileMaker 16, it has partial support.

Once you know how a component of the platform performs or works, you can more efficiently pick those steps that will work and work around those steps that you need that won’t work (if possible).

Write Component-targeted scripts

This strategy involves writing a script for each component that will be used. If I need a process to run using the FileMaker Data API, I should write a script just for that process and component. Even if that same process is used in FileMaker on Windows, there is a benefit to having two separate scripts that do (roughly) the same thing, each customized to its component's script compatibility.

My script workspace might have folders for each component:

  • Go
  • API
  • WebDirect

and so forth that hold specific scripts.

To start, I’d build a script that works completely on the most common component used in my custom app. I might start with FileMaker Pro Advanced on macOS. I’d build the script, duplicate it, and adjust it for the Data API as necessary.

In-Script Logic

Another strategy to use when working with FileMaker script compatibility is to use logic in your scripts that skips over incompatible script steps for a given component. This involves checking for the component that is running the script, and then, at each step in question, skipping over it or performing a different set of steps that do work for that component. The functions Get(SystemPlatform) and Get(Device) are good candidates for component-checking. There are also custom functions that provide this functionality: _IsWin, for example, will return 1 if a Windows machine is running the script.
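
A small sketch of the pattern. The Server check below uses Get ( ApplicationVersion ), a common idiom for detecting the server-side scripting engine, rather than the functions named above; the layout name and dialog text are just illustrative:

If [ PatternCount ( Get ( ApplicationVersion ) ; "Server" ) = 0 ]
  # We have a UI, so it is safe to talk to the user
  Show Custom Dialog [ "Import" ; "The import has finished." ]
End If
If [ Get ( Device ) = 3 or Get ( Device ) = 4 ]
  # iPad or iPhone: use the touch-friendly layout
  Go to Layout [ "Scores_iOS" ]
End If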

The Consequences

When FileMaker runs a script, it will skip over any steps incompatible with that component. Sometimes that isn't a problem. Show Custom Dialog is not compatible on FileMaker Server, and Server will skip that step. That's okay if the script step was meant to show a success message. If, on the other hand, the step included input fields which are used in the rest of the script, there's a problem. So you have to consider each incompatible script step and decide whether the script will break when that step is skipped.

What is ‘Partial’?

There are quite a few script steps that list “partial” in the supported column. Save Records as PDF is one of those. It is partially supported in FileMaker Go, FileMaker WebDirect, and FileMaker Server & Cloud. It seems odd. Why would a step be partially supported? As with anyone's enjoyment of Brussels sprouts, support seems binary. No one “partially” enjoys the gross-smelling green sphere.

Well it turns out FileMaker can support some parts of some script steps. These steps happen to be ones with many options. The Script Workspace is helpful in showing which parts are supported and which are not.

When we encounter a partially-supported step, we can use FileMaker’s documentation to review the partial stuff. (By the way, did you know you can get to the documentation by right-clicking on a step in the Steps menu and choosing “Help”? That’s cool.) The Partial information is found in the notes.

In this case, Save Records as PDF is supported, but the dialog will not show up when this step is run from FileMaker Server. So it is wise to review the notes for partially-supported steps.

FileMaker Script Compatibility: Write scripts with all the tools

FileMaker Script Compatibility is an essential part of every developer’s skill-set. Whether she knows how each of the components works and their functionality limitations or she uses the compatibility checker, it is vital that each script runs successfully and does what it intends to do in every part of the platform.

We all want our custom apps to be faster, right? As problem solvers, we design our system and complex processes to happen quickly so that the user can get on with her task. Well one method we have at our fingertips is FileMaker Perform Script on Server. Let’s discuss this script step, its features and things to watch out for. As we study it and use it properly, we’ll use this script step like a boss.

FileMaker Perform Script on Server

This small-seeming script step was introduced in FileMaker 13. Its purpose is to, as it is called, perform a script on the server. Of course this only applies if the file is actually hosted. You can specify a script in the current file or in an external data source file, run it up on the server, and get back the result. We'll look at why this was such a game changer below.

Perform Script on Server (P.S.O.S., as FileMaker devs like to call it) works in FileMaker Pro, FileMaker Go, and WebDirect. It is not supported in Runtimes, and it doesn’t make sense in Server schedules–scheduled scripts already perform on the server.

Along with the script name and the parameter, you can specify whether to “Wait for completion”. We'll get into this more later.
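
Calling it looks something like this (the script name and parameter are placeholders):

Perform Script on Server [ Specified: From list ; "Process Import (Server)" ; Parameter: $json ; Wait for completion: On ]
# Afterwards, read whatever the server-side script exited with
Set Variable [ $result ; Value: Get ( ScriptResult ) ]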

The Advantage

Complex tasks take up a lot of computing power and time when run against hosted data on a client machine. As of FileMaker 15, records are cached on the local machine, but FileMaker still might have to fetch new records, process them in some way, and then send the updated records back to the server. That's a lot. FileMaker Perform Script on Server does the complex tasks right there on the server, where the records are actually sitting. There's no transfer of data back and forth. Server simply does the work, then updates the client cache with any changed data. The difference in time between the same script running on the client and on the server is noticeable.

With great power . . .

As with most things in life, when you're given a great tool, precautions must be taken and your eyes need to be extra vigilant. Perform Script on Server, too, must be handled carefully.

Who is performing the script?

When the PSOS script step is called, it opens the file on the server with the same security as the current user invoking the script step. If I log into the hosted file on my machine as “jbrown” with a privilege set of “HR”, any PSOS script I run will operate with those same security settings. The login for PSOS is very similar to mine in terms of security, but no UI is drawn as I would see it on my machine. FileMaker Server can open the file and perform the script with no interface. This login also runs the OnFirstWindowOpen script. If the first-run script sets globals and goes to a “dashboard” layout, then PSOS will do the same. Even though it logs in with my username and password, it is still the server running the script. So any functions that return the current time or date will return Server's current time and date.

Context is king

FileMaker Server is powerful, but when a PSOS script is called and FMS begins the work, it has no idea where to begin. At the end of the PSOS login process, the script is on the onOpen layout, but that's often not the context in which you want to process data. So you need to explicitly add a script step in the PSOS script to go to a specific layout. Additionally, it is vital to make sure all globals used in the rest of the script are updated as necessary.

Missing Values

Likewise, any values set in fields with global storage or in global variables will be empty. Even if a global field or variable was set in the client file, it does not exist in the PSOS session.

Which records?

Just like the context and global values, the FileMaker Perform Script on Server script doesn't know which records to work on. The script running on the client, the one that called PSOS in the first place, may have found some records, but that means nothing to the work done on the server. So in some way, PSOS needs to know which records to find. And you can do this in many ways. Here are just a few:

  • Do the find in the PSOS script itself. Let FileMaker Server find the records. But of course, you'll have to tell the PSOS script which records to find, and that can be done by passing the find criteria in as a parameter of the PSOS script (see the sketch below).
  • Do the find in the script performed on the client. Pass the primary keys of the entire found set up to the FileMaker Perform Script on Server script. Have the PSOS script go to a layout that contains a global field, and place the primary key list in that field. Finally, use Go to Related Record to go to the correct context from the starting point of that field.
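
A minimal sketch of the first approach. The layout name, field names, and the JSON shape of the parameter are assumptions for illustration:

# PSOS script: the parameter is assumed to be JSON, e.g. {"testID":"T-2"}
Set Error Capture [ On ]
Set Variable [ $param ; Value: Get ( ScriptParameter ) ]
Go to Layout [ "Scores_Processing" (Scores) ]
Enter Find Mode [ Pause: Off ]
Set Field [ Scores::id_test ; JSONGetElement ( $param ; "testID" ) ]
Perform Find []
If [ Get ( LastError ) ≠ 0 ]
  Exit Script [ Text Result: JSONSetElement ( "" ; "error" ; Get ( LastError ) ; JSONNumber ) ]
End If
# ...process the found set here...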

What edits to make

Once PSOS has the found set, it is vital to tell it what to do with these records, what updates to make. Again, since the client script probably has that info, it must pass this information up to the server. Along with the find criteria, I'd pass this up to PSOS as a JSON object. We often use fields with global storage as places people can enter find criteria. Those aren't available in the PSOS session, so we have to pass them up. Here's a sample of what I did while working with my user group colleague.

The first item was the find criteria. The remaining items were used to update the found set of records. Both the find criteria and the update info were generated in the client-run script based on user choices.
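
A rough sketch of what building that parameter might look like on the client (the key names, global fields, and values here are hypothetical, not the actual sample):

Set Variable [ $param ; Value:
  JSONSetElement ( "" ;
    [ "find.testID" ; Scores::g_TestID ; JSONString ] ;
    [ "update.status" ; Scores::g_NewStatus ; JSONString ] ;
    [ "update.gradedBy" ; Get ( AccountName ) ; JSONString ]
  )
]
Perform Script on Server [ Specified: From list ; "Process Scores (Server)" ; Parameter: $param ; Wait for completion: On ]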

Locked Records

Just as with a client-run script, if a record is locked, the FileMaker Perform Script on Server script cannot update that record. If we're in the middle of a loop and the 47th record is locked, it won't get updated. We have to handle that in some way.

We advocate a transactional approach to handle records that might be locked. If all of them should be processed or none of them, transactions are the way to go.

Errors

PSOS cannot be debugged in the normal sense using the Data Viewer and Script Debugger, so instead we have to find all the errors in our code and eliminate those. But we also have to prepare for unexpected events. To the former, as we work on the script, we can choose to run it on the client instead and debug it using the normal tools. We work through the script and make sure that everything works as expected. We also make sure that the script goes to the right context and finds the right records.

Unexpected events happen, and unexpected events could happen in the PSOS script, which could cause major issues. So we have to be extra diligent in bailing out of the script whenever an error is encountered that would break the rest of the script's processing. Here are a few times you'd want to bail out of the PSOS script when an error occurs:

Possible Errors to check

  • There are no records in the found set.
  • The PSOS session fails to go to the correct layout.
  • A record fails to update (though you may want to handle this differently: rather than bailing out of the PSOS script, simply log the record that is uneditable).

The bail-out process is pretty simple: after setting Error Capture to On, get the error of the previous step. If it is not 0, then exit the script with that error code as the script result:

Set Error Capture [ On ]
…
Perform Find []
Set Variable [ $error ; Value: Get ( LastError ) ]
If [ $error ≠ 0 ]
  Set Variable [ $json ; Value: JSONSetElement ( "" ; "error" ; $error ; JSONNumber ) ]
  Exit Script [ Text Result: $json ]
End If

In the client script, the one that called the PSOS script, use Get(ScriptResult) to see what error took place: JSONGetElement ( $result ; "error" ).

Wait for it

Finally, the option to Wait for completion in the FileMaker Perform Script on Server step, which is on by default, simply tells the calling script to wait for the PSOS script to finish its work. In most cases, you want this. The script on the server is working, processing a bunch of records, and that process needs to finish before the client script can continue. But it is possible to uncheck this option and allow the script to work on its own. There seem to be fewer cases where you'd want this, but it is possible.

Perform Script on Server: Use it

FileMaker Perform Script on Server is a powerful tool that offloads complex processes to the server. When used properly and when considered carefully, its use can speed up the processing of data for your users.

 

FileMaker Pro Advanced gives us developers the ability to create our own custom functions: our own calculations that return a result. These are FileMaker custom functions. Using custom functions is a boss-level technique, and we all now get to use them and share them. With great power comes great thinking: you as a developer get to decide when and how to use them. But first, let's take a look at how to work with custom functions.

FileMaker custom functions

Custom functions are lines of code that we write that return a simple or complex result. They are found under the File > Manage > Custom Functions menu.

FileMaker Custom Function dialog

Here is where you create a custom function

Custom Functions have a few characteristics to them:

  • They require a name.
  • They can accept parameters, but don’t have to.
  • Some sort of calculation is required.
  • They are file-specific.
  • At this time, the custom function dialog is not type-ahead enabled, unlike a regular calc dialog.
  • They can return one result or a recursive result.

Let’s look at each of these.

Require a name

Once a name is given, the custom function is available both in the functions list (under custom functions) and as part of the type-ahead. Some folks clearly identify a custom function by giving its name some prefix. I tend to use “_” at the beginning: “_CF_BeginningOfWeek” (a function that returns the date of the first day of a week), but the structure can be anything, even containing spaces between words.

Accept parameters

Parameters are bits of information we pass into the custom function to be used in the calculation. These parameters go in the ( ) that appears after the name. For example, in the “_CF_BeginningOfWeek ( anyDate )” custom function, I am passing in a date. I've named the parameter “anyDate”, but it could be named anything. I could work with any of these:

_CF_BeginningOfWeek ( Get ( CurrentDate ) )
_CF_BeginningOfWeek ( Date ( 3 ; 19 ;2018 ) )
_CF_BeginningOfWeek ( “5/8/2018” )

What's in the parentheses gets passed into the custom function and is used throughout. For example, in this custom function, Date ( 3 ; 19 ; 2018 ) is passed into the calculation, and this value is assigned to the parameter “anyDate”, the one I put in the CF setup.

 
Let ([
  _dow = DayOfWeek ( anyDate ) ; // anyDate = 3/19/2018
  _offset = _dow - 1             // days back to the first day of that week
];
  GetAsDate ( anyDate - _offset )
)

A CF can contain any number of parameters, including zero. Many folks create custom functions that turn the numerical value returned from a FileMaker function into something readable. The FileMaker function Get ( Device ) returns 1 if the user is on a Mac. Would you remember 1 being the Mac (especially if your device of choice is a machine with the Windows OS)?

So folks will wrap this function in a custom function and call the custom function instead.

One other note about parameters: you can use any number of them, and each one must have something passed to it. Look at this custom function called _IsOverlap. Its purpose is to determine whether two date ranges overlap.


As I built this for a project, I realized that sometimes there is no end date for one of the ranges. I still need this to work, so I have to pass something into the custom function for the end dates. So inside the custom function, I’m checking to see if the EndDate parameters are empty. If so, I put a placeholder date in there.
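
Here's a rough sketch of what such a function could look like; the placeholder date, the parameter names, and the exact comparison are illustrative and may differ from my production version:

// _IsOverlap ( start1 ; end1 ; start2 ; end2 )
Let ([
  // if an end date is missing, substitute a far-future placeholder
  _end1 = If ( IsEmpty ( end1 ) ; Date ( 1 ; 1 ; 3000 ) ; end1 ) ;
  _end2 = If ( IsEmpty ( end2 ) ; Date ( 1 ; 1 ; 3000 ) ; end2 )
];
  // the ranges overlap when each one starts on or before the other ends
  start1 ≤ _end2 and start2 ≤ _end1
)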

The calculation

The whole point of a custom function is to do some calculation, so do something in this section. You can use any combination of built-in FileMaker functions, from-scratch calculations, or even other custom functions (provided those already exist).

The calculation can be simple or complex. It all depends on your needs, as we’ll discuss in a bit.

Other characteristics

Custom functions are file-specific. Each file has zero custom functions at first creation. But it is a simple matter to copy a set of custom functions from one file to another. Simply select the ones to copy in Manage Custom Functions in File A and paste them into the same dialog in File B.

The custom function calculation dialog does not have the type-ahead feature we've gotten used to in other areas. Some folks complain. I don't. I deal with it and move on. You can select FileMaker functions from the dialog, but it is simpler to type them out.

And finally, custom functions can be recursive: a function can call itself until a condition is met and then return the result. We will address these in a later post.

Well-used custom functions

FileMaker custom functions have their uses, and you can make the decision for yourselves. Here are a few good thoughts on them by others. Beyond that, the decision to create a CF is a simple one when you consider these ideas.

One calc used many times

Sometimes a calculation will need to be used throughout the whole custom app. If many places in your app require the date of the first day of any week, that seems a likely candidate for a CF. This has an added benefit: if the calculation needs to be changed for some reason (returning the Sunday date instead of the Monday date), there's one place to update all the results. That's useful.

Human-readable result

You want a result that makes sense to humans: _CF_GetDevice() will return “Mac” instead of 1. Making it readable means you don't even have to think. Here's another example of a set of custom functions using the Get(Device) FileMaker function.
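
A minimal sketch of what that set could look like, using the prefix convention from earlier and Get ( Device )'s documented codes (1 for Mac, 2 for Windows, 3 for iPad, 4 for iPhone):

// _IsMac
Get ( Device ) = 1

// _IsWin
Get ( Device ) = 2

// _IsIPad
Get ( Device ) = 3

// _IsIPhone
Get ( Device ) = 4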

Each of these returns 1 or 0, true or false for a given device.

These can be used in a series of logic script steps. For example:
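
Something like this, where the script names are hypothetical:

If [ _IsMac ]
  Perform Script [ "Print Report (Mac)" ]
Else If [ _IsWin ]
  Perform Script [ "Print Report (Windows)" ]
End If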

One more thing

In the history of custom functions, many people have discussed their pros and cons. Some of those cons have gone away now that we all have access to the feature, but it is worth noting that a few still remain. We'll look at those in a later post.

Resources for Custom Functions:

There are tons of FileMaker custom functions out there for different purposes. Developers come up with CFs that fit a specific need and then make them available. Here are just a few resources.

FileMaker Custom functions are a part of the full toolbox we all now have at our fingertips. Knowing how they work and when/why you would use them is important. Explore them and see how they fit into your development work.