In the past year and a half here at Geist Interactive, I’ve been tasked with development work that involves connecting to various APIs. I’ve written modules for clients that retrieve definitions, other word forms, synonyms and antonyms, and hypernyms and hyponyms of a term from the client’s internal dictionary and from public dictionaries such as the Oxford English Dictionary and Wordnik. I’ve connected to EasyPost to schedule a shipment for delivery, buy postage for that delivery, and get a return label. I’ve also connected to Gravity Forms, a WordPress plugin, to retrieve submitted forms. And I’ve done all this without knowing how I did it. The magic behind my success is our HTTP ({Request}) script, found here. If you need to ‘magically’ connect to APIs and get back the response, read on.

It’s okay to not know

Before I get into explaining how magical this script is, I want to clarify the title: I don’t know how the HTTP ({Request}) script works, and I’m okay with that. The latter part is tough for lots of folks to affirm for themselves. We innovate, solve problems for customers, and generally create something from nothing, so it makes sense that we want to understand how it all works. But that’s not necessary. We don’t have to know why something works; we only have to be confident that it does work. Just like a custom function or a JavaScript library, we can safely install something that works into our system and worry not a whit about how that happens.

Using the HTTP Request Script

Back to the HTTP ({Request}) script. A few years ago Todd created this script to handle every instance of API connections. It’s a long script and does a lot of things, but its only goal is to return the response from an API. It takes some parameters (which we’ll explore shortly) and returns the response. He’s tested it; the script is used every day here at Geist Interactive, and there’s a whole suite of tests to run in case anyone needs convincing.

I started to use it and, at first, I wanted to understand it. I asked Todd. He said: “It doesn’t matter. It just works. Just use it.”

The Insert from URL script step

I think this script has something to do with the Insert from URL script step. This step is the logical mechanism for getting a response back from an API. Looking at this step, the first few options are fairly straightforward. It’s the “Specify cURL options” option that I don’t know anything about.

I mean, what is “--cookie”? Now I want a cookie.

So this post is not about how the HTTP ({Request}) script works. Rather, this post is about how to call the script and relax in the knowledge that ‘it just works’.

The HTTP ({Request}) script has the following documentation in the form of comments at the top of the script.

That’s about all the documentation you need. Pass this script an object containing the properties required by your service, and it will handle the rest. Go ahead and download the file, extract the script, and start using it in your work.

Read on, however, if you want to see how to set up the parameters for this script.

Set up is Simple

The HTTP ({Request}) script needs an object of parameters for the current API request. You can see that in the above documentation. It needs:

  • url – the address of the API
  • query – an optional set of query parameters (see below)
  • headers – the set of headers for the API
  • auth.user – an optional auth user
  • auth.password – an optional password
  • data – the optional data you want to pass to the API (as applicable)

That’s it.
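For instance, a request object for a hypothetical dictionary API might look like this (the endpoint, header names, and credentials below are made up for illustration):

```json
{
  "url" : "https://api.example.com/v1/entries/en/magic",
  "query" : { "limit" : "10" },
  "headers" : {
    "app_id" : "YOUR_APP_ID",
    "app_key" : "YOUR_APP_KEY"
  }
}
```

Pass an object like that as the script parameter, and the script does the rest.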


The script takes these parameters in a ‘request’ object and returns the response. So we just need to create a series of steps that construct the object to send to the subscript. Here are some examples:

In this example, I set up the url and the headers (that were specific to this API) in separate script steps. Then they’re combined into one variable.

In this instance, the API wanted the app_id and app_key as part of the headers, so I did not include the auth object in my setup.
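As a sketch, that kind of setup might look something like the following script steps (the endpoint and variable names are illustrative, not the exact steps from my file):

```
Set Variable [ $url ; Value: "https://api.example.com/v1/entries/en/" & $word ]
Set Variable [ $headers ; Value: JSONSetElement ( "{}" ;
    [ "app_id" ; $appID ; JSONString ] ;
    [ "app_key" ; $appKey ; JSONString ] ) ]
Set Variable [ $request ; Value: JSONSetElement ( "{}" ;
    [ "url" ; $url ; JSONString ] ;
    [ "headers" ; $headers ; JSONObject ] ) ]
Perform Script [ "HTTP ({Request})" ; Parameter: $request ]
Set Variable [ $response ; Value: Get ( ScriptResult ) ]
```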

Of course there are many ways to construct this. In fact, if you use Generator, that file will generate the scripts for the request.

Here’s another example where I’m passing a query to the API request:

In this example, the query requirements call for a nested object. paging|_page_size_| is the setup for the paging[page_size] query parameter I can pass to the request.

Since FileMaker itself can’t handle [] inside the keys of a JSON object, the HTTP ({Request}) script handles that with the placeholder text |_ and _|. (You can find this information in the documentation above.)
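So a query that needs paging[page_size] and paging[current_page] might be written like this (hypothetical endpoint and parameter names):

```json
{
  "url" : "https://api.example.com/v1/forms",
  "query" : {
    "paging|_page_size_|" : "25",
    "paging|_current_page_|" : "1"
  }
}
```

The script then translates those keys back into paging[page_size]=25&paging[current_page]=1 when building the request.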

HTTP ({Request}) can handle any need you have. It can handle container field data in the form of base64. Here’s an example:
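A sketch of that idea, assuming the script accepts base64 text inside the data object (the property names and endpoint here are illustrative; check the script’s documentation for the exact keys it expects):

```
Set Variable [ $fileData ; Value: Base64Encode ( Invoices::PDF ) ]
Set Variable [ $request ; Value: JSONSetElement ( "{}" ;
    [ "url" ; "https://api.example.com/v1/uploads" ; JSONString ] ;
    [ "data.file" ; $fileData ; JSONString ] ) ]
Perform Script [ "HTTP ({Request})" ; Parameter: $request ]
```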

It can handle multi-part forms.

HTTP ({Request}) can handle pretty much anything. And the beauty of it is that you don’t have to know how it works. I don’t know how it works and yet I can use it all the time.

Handling the Response

The HTTP ({Request}) script takes your parameters and passes them to the Insert from URL step. Finally, the script handles the response. It separates out the status, the code, the headers, and the body, reorganizing them for your use.
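After the call, the script result can be picked apart with the standard JSON functions. The key names below illustrate that status/code/headers/body separation; they are a sketch, not the script’s exact output:

```
Set Variable [ $response ; Value: Get ( ScriptResult ) ]
# Pull out the pieces with the standard JSON functions
Set Variable [ $code ; Value: JSONGetElement ( $response ; "code" ) ]
Set Variable [ $body ; Value: JSONGetElement ( $response ; "body" ) ]
If [ $code ≠ 200 ]
    # Handle the problem using the status, code, and headers
End If
```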

I encourage you to give this a try. Copy this one script, HTTP ({ Request }), into your app, set up the parameters, pass them to the script, get the response back, and process the result as you need.

There are a lot of things we need to worry about and understand in FileMaker. Sending a request to an API and getting its response back is not one of them. Use something that has been tested and is used multiple times a day. Use this script, don’t give it a second thought, and move on with your task list for the day.

The End

EPILOGUE: Since starting to use the HTTP ({ Request }) script, I’ve had occasion to look through it. I do understand what the script is doing. But that doesn’t matter. It’s all academic. I still just use it. 🙂

The topic of FileMaker tests is important, to me and to clients. I want to make sure my scripts work, and my clients don’t want bugs. For DevCon 2018, I put together a session about this very topic in the hopes of sparking the conversation. The recorded sessions from DevCon are now available online. Here is a link to my session on Building Testable Applications.

Introducing a new topic

In preparing for this talk, I did a fair bit of research on software testing. It became obvious early on that software testing has a lot of depth and breadth, so I wouldn’t be able to cover it all in a one-hour session. I was also grappling with the fact that the FileMaker community has had limited discussion on testing.

I decided to focus my session on what FileMaker tests can do and to prove testing’s worth with some live examples. My talk easily could have spanned two hours, so there are a few places where I had to gloss over key topics. I barely touched on modular scripting, the reasons to use custom functions, and the pre-conditions for testing. We will address these at some point; they play a vital role in building FileMaker tests. My goal was to pique interest in testing in the FileMaker community.

Why test?

The first few minutes were focused on why we should write tests for complex business applications. Any developer that has implemented changes on a complex system already knows the answer: we need to STOP BREAKING CODE!

Fair enough…on to the hard part. How?

Factors to Consider

As you design a testable custom app, you need to consider the following factors and use the following tools. At first, these ideas might seem foreign, but the more you engage with them, the easier they will be to work with in the future.

Designing a system so it CAN be tested

Modular scripting is the vital first step toward creating testable applications. If you have a piece of executable logic that accomplishes a specific function, you can test that one function. With “spaghetti” code, however, the same logic is often duplicated in multiple locations, runs from different contexts, or is mixed with other discrete pieces of logic in one script. Instead, make your scripts modular and stay DRY. That’s the first step to enabling testing.


Every discrete piece of logic should not only accept a script parameter that is JSON, but return JSON as well: an error object comes back if there was a problem, and a result object comes back if everything worked. This means you have a modular script that can be given a payload and return results. And JSON is the way to go. Trying to build a complex payload using anything other than JSON introduces you to a world of hurt: you’ll be working with a lot of data and will have to find some other way to keep it organized.
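As a sketch, the contract for such a script might look like this (the property names illustrate the convention; they are not a required schema):

```
# Incoming parameter, as JSON:
#   { "customerID" : "1A2B", "amount" : 100 }
Set Variable [ $param ; Value: Get ( ScriptParameter ) ]
Set Variable [ $customerID ; Value: JSONGetElement ( $param ; "customerID" ) ]

# On success, return a result object:
Exit Script [ Text Result: JSONSetElement ( "{}" ; "result.invoiceID" ; $invoiceID ; JSONString ) ]

# On failure, return an error object instead:
Exit Script [ Text Result: JSONSetElement ( "{}" ;
    [ "error.code" ; 201 ; JSONNumber ] ;
    [ "error.message" ; "Unable to create invoice" ; JSONString ] ) ]
```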

Custom Functions

Geist Interactive has released custom functions on Github. We have repositories for handling JSON validation and errors and for analyzing test results. Even if you aren’t adopting testing yet, go get the other custom functions now. They’re amazingly helpful for handling parameters in a meaningful way.

My examples

The following examples range from simple to advanced illustrations of what we mean by testing. Take a look at each one and pick out the above factors in each example.

The simple one

My first example was as simple as anything could be: a script that multiplied a number by 2. Amazing, right? This file isn’t about what it does; it’s all about how it does it. If you follow along in the video, you’ll get a good sense of how to separate scripts into interface scripts and controller scripts. The controller script is testable; the interface script just calls on the logic.
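The separation might be sketched like this (script and field names are illustrative):

```
# Interface script: "Double Number (Interface)" gathers input from the UI
Set Variable [ $param ; Value: JSONSetElement ( "{}" ; "number" ; Numbers::Input ; JSONNumber ) ]
Perform Script [ "Double Number (Controller)" ; Parameter: $param ]
Set Field [ Numbers::Output ; JSONGetElement ( Get ( ScriptResult ) ; "result" ) ]

# Controller script: "Double Number (Controller)" holds the pure, testable logic
Set Variable [ $number ; Value: JSONGetElement ( Get ( ScriptParameter ) ; "number" ) ]
Exit Script [ Text Result: JSONSetElement ( "{}" ; "result" ; $number * 2 ; JSONNumber ) ]
```

Because the controller takes a JSON payload and returns a JSON result, a test can call it directly, without the layout.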

Purchase order sample

I released a purchase order example that is probably the best thing to take from my session. The file is a good dive into a simple solution that’s pretty intuitive. You just need to turn on the Script Debugger and follow the code. You’ll learn in no time how we manage to create records using a controller.

Once you’ve got a grasp on that, open up the test file, and start writing your own tests. I dare say it is the best learning tool so far! I’d recommend imagining customer requests, changing the code, and writing tests. Then change the business logic, write more FileMaker tests, and you’ll get a feel for how testing can (or can’t) solve problems for you. As your solution gets more complex, you’ll need more tests, but it doesn’t really get more difficult. If you can implement a change to your code, and write tests to support it, you’re good!

We practice what we preach

Along with the custom functions and our work on modular FileMaker, we put this testing concept into practice.

Karbon Tests

As most people have heard by now, Geist Interactive released Karbon at DevCon. This is the most complex solution that is publicly available with tests integrated. If you’ve mastered the Purchase Order Example, feel free to give it a shot here as well.

Testing Generator

Writing tests can be monotonous: you are declaring a case, subject, payload, script, and assert functions for each test. Where code is predictable, we don’t write it; we generate it. I released a generator file as well, which can generate a template for your testable scripts. Feedback welcome!

Long live FileMaker Tests

Since DevCon, I’ve had a few inquiries about testing. There are a few folks out there who are interested. I hope more folks adopt a testing model for their custom apps.

It might take some time before everyone starts using this, but I’m convinced our community will become more interested in testing over time. One issue is that testing has pre-conditions in your solution, so if scripting isn’t modular, you aren’t ready to start testing. I’ll keep telling myself that’s why I’m not getting questions. Everyone is just re-factoring code to be modular…that’s it!

What’s next

There are some interesting opportunities for test generation. We released Karbon_Generator, which is a developer tool to generate Controller Scripts. Because certain things are known about your code, we should be able to produce scripts that test any other script. There’s a lot to iron out, but maybe by DevCon next year!

It is always good to know how to get started with any product. And in this video, Barbara Cooney takes us through how to do just that with LedgerLink.

Follow along with Barbara as she helps you get started using LedgerLink:

Key Points

  • Obviously, you need to have a LedgerLink license. We offer a demo license so you can test it out.
  • You need to enter a Master Admin account when logging into QuickBooks Online in order to connect LedgerLink.
  • After logging in, you’ll be given a list of your companies. After selecting one of them, that company information is brought into Ledger Link.

Common Mistakes Include

  • Trying to connect to QuickBooks Online with an account that doesn’t have Master Admin privileges.
  • Leaving the Sandbox option checked. The sandbox comes from QuickBooks Online and is used for testing. You might not have a sandbox account, so be sure to uncheck this option.

After the initial connection

There are two options for using LedgerLink:

  1. Use LedgerLink to store data as a starter file.
  2. Use LedgerLink to act as a bridge between Quickbooks Online and your custom app. Basically, use LedgerLink for its sync engine.

LedgerLink Configuration

Head to the “Sync Config Log” screen. This is the place to mark what you want to send to and receive from QuickBooks Online. Each of the entities in LedgerLink can be sent, received, or both. Review this section to see which entities apply to your needs and uncheck those that don’t.

Read more information about LedgerLink here. Download the demo and give it a try.

The FileMaker platform keeps getting better. The yearly release schedule gives us plenty to look forward to in new features, and the timeline gives us plenty of time to get to know what has been included. One of the new features is called “Default Fields”. We can now set up default fields to be created with each table in each file. There are plenty of articles out there already, and the knowledge-base article is out now. But I’ve seen some questions in the forums, sometimes the same ones. I did some research and put together the answers here. Let’s dive into FileMaker Default Fields.

Where are the Default Fields?

FileMaker 17 includes the default defaultFields.xml file in this location:

  • Mac: Applications/FileMaker Pro 17 Advanced/FileMaker
  • Win: <drive>:\Program Files\FileMaker\FileMaker Pro 17 Advanced\Extensions\English

This is just the place to get the defaultFields.xml file. You shouldn’t move it. Just copy it and paste it to the proper location:

  • Mac: /Users/Shared/FileMaker/Shared/
  • Win: <drive>:\ProgramData\FileMaker\Shared\

Once there, you can open this file in a text editor (I use Visual Studio Code) and edit away.

How do we change the FileMaker default fields?

The provided defaultFields.xml file (found and copied into place as described above) allows us to make changes to the default fields. Here are some thoughts about it:

  • Start by making only a few changes to a field or two. Edit, save and see what happens when you create a new table. Get a feel for how the XML is written.
  • There’s a tag in the XML, near the bottom in the <tagList> tag, that identifies one of the fields as the primary key. Make sure that stays there if you plan on using add-ons in FileMaker 17 or any later version.

  • There’s a tag in the XML that identifies each of these fields as a utility field. It is found in the <TagList> tag: “#_FMI_0”. The 0 identifies the field as utility. Remove it if you wish, but understand the (small) consequence of doing so (see below).
  • As you adjust a field’s XML, you can get immediate feedback by creating a new table in a .fmp12 file. If there’s any issue with the XML of a field, such as a missing quotation mark or a misspelled tag, that field will not be created. So give it a test.
  • If you’d rather not use FileMaker default fields at all, simply place a blank defaultFields.xml file in that shared location. This prevents the default fields from being created.

How will default fields affect my development?

Using FileMaker default fields can enhance our development in a small but crucial way: you can create a table and immediately begin adding data fields to your system, since the utility fields are generated automatically. The following points illustrate more benefits:

  • We can add other utility fields by adjusting the defaultFields.xml file.
  • We can set this defaultFields.xml file up once and be done.
  • Every table will have common utility fields which record the basic necessities: primary keys, creation & modification user & timestamp.
  • We’ll never have to copy/paste the utility fields from one table to another. One less thing to worry about. I’m sure folks have forgotten to add these at one time or another.
  • Since these are utility fields, they do not get added to the layout automatically (as fields you create during table creation do).
  • Utility field-creation is consistent across an entire team.
  • A field is identified as the primary key field which is useful for add-ons.

How do I add a certain type of field?

Many folks have asked in the forums how to create a certain kind of field. An amazing tool by Salvatore Colangelo at Goya allows you to create the fields you want for each table by constructing the defaultFields.xml file. The FileMaker file actually writes the XML of the fields after we make choices (name, type, primary key, calculation, etc.).

Of course you can simply duplicate the xml for a field in the document and adjust it as necessary. By hand. That’s useful too.

But consider a very key point:

Every field in the defaultFields.xml file will be created in every table of every file you work with in FileMaker 17. So if we add a “FirstName” field to our defaultFields.xml file, that field will get created even in the Invoices or Dashboard or Inventory table we create.

Only a few

It seems to me there are only a few fields that should always be part of the XML file. We here at Geist Interactive want certain fields in every table; some other developers want more, some less. Here are ours.

  • primaryKey // the primary key of the table. UUIDNumber
  • ModificationUser //autoEnter field
  • ModificationTimeStamp // autoEnter field
  • CreationUser //autoEnter field
  • CreationTimeStamp //autoEnter field
  • z_One_c  // a calculation field that returns a 1. Useful for relationships.
  • ModCount // a calculation field that counts the number of times each record has been modified

Remember, these are utility fields: they serve a utility purpose and are not data fields as far as the custom app is concerned.

New Feature, New Thinking

The FileMaker Default Fields feature is new to FileMaker 17, and it provides a glimpse into the future. We, as developers, can create FileMaker fields using XML, either by hand or using a tool. That’s an interesting thought.

You’re already using them if you’re developing in FileMaker Pro 17 Advanced, so embrace them and make them your own and be happy. They are here to stay.




FileMaker 17 introduces a new script step: Perform Script by Name.  FileMaker Devs have been asking for this feature for a long time. It sounds like a useful idea, but we should probably try to understand how it works before we just adopt it willy-nilly, for all our script calling needs.  Let’s explore this idea together.

Perform Script by Name: The basics

Perform Script By Name is a script step that functions as it is named. We now have two options when using the Perform Script step: “By name” and “From list”.

If “From list” is chosen, then we go about choosing a script in the normal way. However, if “By name” is selected, we get to set the name of the script using the calculation dialog.

At the script’s runtime, this step checks to see if the script is in the list of scripts and runs that one. If it isn’t in the list, then we get the good ol’ message:

We can suppress the message with the Set Error Capture [on] step and capture the error (104 Script is missing) with the Get (LastError) function.
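Put together, a defensive call might look like this sketch ($scriptName and the dialog text are illustrative):

```
Set Error Capture [ On ]
Perform Script [ Specified: By name ; $scriptName ; Parameter: $param ]
If [ Get ( LastError ) = 104 ]
    # 104 = Script is missing
    Show Custom Dialog [ "Error" ; "No script named " & $scriptName & " was found." ]
End If
```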

In a multi-file solution, we can use this same step to call a script in another file using the same table/field format: ExternalFileRefName::ScriptName.

Notice this step is using the external file reference name, not the actual file name. We need to use the name we gave to the file reference.

Pretty simple. There doesn’t seem to be much else to this. Set the name of the script or external file reference name and script in some manner, and this step will run that script.

Oh, by the way, this works for Perform Script on Server as well. As we set up a script to run there, we are presented with the same dialog.

Wait, what? Why is this useful?

Here in FileMaker Pro 17 Advanced, we have the option to set a script name and run that script, a step that belongs with the other ‘by name’ steps: Set Field By Name, Go to Layout by Name, and this one. When I first saw it, it struck me as odd: why would I ever use a step that adds another source of indirection to my file and is fragile? Let’s consider its possible uses, and the idea that this step is fragile, to see if we can find some workarounds.

It is useful after all

The option to perform script by name is actually useful. It can get rid of complex If / Else If / End If logic that determines which script to run. Maybe in a custom app there’s a script called “Edit” that is run from both a student record and a teacher record. The first part of this Edit script would have to use If logic to determine which edit script to run, something like this:

Quite a few script steps.

With this new option, we can reduce the steps:

This does seem useful. It would certainly reduce the number of steps needed to determine the script. In a recent project, I saw one navigation script that ran for the entire system. It was literally 150 lines long with a bunch of Else If logic steps, and it was tough to debug. So there’s that. This option is useful.
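To make the comparison concrete, here is a sketch of the two approaches (the $recordType variable and script names are illustrative):

```
# Before: branch on every record type
If [ $recordType = "Student" ]
    Perform Script [ "Edit Student" ]
Else If [ $recordType = "Teacher" ]
    Perform Script [ "Edit Teacher" ]
End If

# After: one step, by name
Set Variable [ $script ; Value: "Edit " & $recordType ]
Perform Script [ Specified: By name ; $script ; Parameter: $param ]
```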

Notice in the example above, I’m setting the name of the script into a variable and then passing that variable into the Perform Script step. That seems useful and handy.

Since the script name is set via a calculation dialog, I have a plethora of ways to determine the script: I can construct it as shown above, get it from a preference table, or pass the script name in via a parameter, to name a few.

This new option in Perform Script allows us to write some pretty complex logic in a simpler way. If reducing the number of lines of code is your goal, Perform Script By Name is a must-use.

“Fra-gee-lay. It must be Italian”

Any developer’s first thought about this script step is that it can be easily broken. If my script name today is “ThatScript” and for some reason (legit or not) I decide to change it to “That Script”, then my perform script by name step is broken. That is certainly true.

Another factor to consider is that Perform Script By Name is another source of indirection, one that our Realtime Developer Intelligence tool FMPerception can find and point out. Every time I use Perform Script by Name [“ThatScript”], I reduce the number of places where FMPerception or other database analysis tools can identify in the DDR where and when that script is used. That’s not bad; it is just something to consider.

But fragility and indirection are not, by themselves, reasons to avoid this script step option. There are possible workarounds to the fragility issue. Let’s take a look at some of those.


Don’t change names

The first ‘workaround’ I can think of is: don’t change your script names. It seems obvious, but also almost unthinkable. FileMaker is a rapid-application-development tool. We can create a custom app quickly and easily, and that includes changing things on the fly. If a field, script, or layout’s original name is now unsatisfactory to us, it is easy to change, because FileMaker actually refers to fields, layouts, and scripts by their internal IDs when calling, going to, or setting them. So we’ve had it easy: we can change names as much as we want. I’d argue that frequent renaming is a sign of not-fully-thought-out planning, but I make no judgements. It happens. Still, I’d encourage us to change the names of things as little as possible.


Document the ‘by name’ scripts

Second, if we do start using a script in the ‘by name’ option of Perform Script, we need to document that very clearly. In the pictures above, my script “ThatScript” is being used in the ‘by name’ option. So I should go over to “ThatScript” and document at the top: “###### USED IN PERFORM SCRIPT BY NAME. DO NOT CHANGE THE NAME ######” or something along those lines to inform everyone, including your future self, to tread carefully with this script. Or put these called scripts in a “BY NAME” folder. Anything to alert folks to its special case.

Generate script names

Third, we can generate the names of the scripts that will be used in the following parent script.

At the top of my Edit_Record script, I have a code block that gathers the names of the scripts that could be used in this parent script. I first set a global variable to 1. Then I go into each of the scripts that could be used.

Inside each script, at the top, I immediately exit with the script name.

and then collect that name into a JSON object, in my case, in $scripts.

This works. It adds complexity to your parent script, but it does ensure that I get the correct script name each time.
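The pattern described above might be sketched like this (the $$collect flag and JSON keys are illustrative):

```
# In the parent script: gather the real script names
Set Variable [ $$collect ; Value: 1 ]
Perform Script [ "Edit Student" ]
Set Variable [ $scripts ; Value: JSONSetElement ( $scripts ; "student" ; Get ( ScriptResult ) ; JSONString ) ]
Perform Script [ "Edit Teacher" ]
Set Variable [ $scripts ; Value: JSONSetElement ( $scripts ; "teacher" ; Get ( ScriptResult ) ; JSONString ) ]
Set Variable [ $$collect ; Value: "" ]

# At the top of each child script: exit immediately with the script's own name
If [ $$collect = 1 ]
    Exit Script [ Text Result: Get ( ScriptName ) ]
End If
```

Because each child script reports its own name via Get ( ScriptName ), a renamed script can never break the later by-name call.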

Other possibilities

There are more ways to ensure that you have the correct script names every time. There’s a good function I just discovered: ScriptNames(). It returns a list of the scripts in a file. That could be useful: you could gather the expected script names into a global and then check each one against this list using FilterValues() before calling it.

Don’t be afraid

This new option is useful and valuable in the right circumstances. I’d say we shouldn’t be afraid of it or avoid it. Use it where appropriate. But don’t go overboard. Don’t use it like we all started using ExecuteSQL() when it came out (that is, everywhere).

Be deliberate in its use. Document the heck out of scripts that are used in this “by name” manner, and be careful about refactoring script names for no good reason.

I look forward to using this to simplify complex logic. Give it a try, and let us know how you use the new Perform Script by Name step.



FileMaker Pro Advanced gives us developers the ability to create our own custom functions: our own calculations that return a result. These are FileMaker custom functions. Using custom functions is a boss-level technique, and now we all get to use them and share them. With great power comes great thinking: you as a developer get to decide when and how to use them. But first, let’s take a look at how to work with custom functions.

FileMaker custom functions

Custom functions are lines of code that we write that return a simple or complex result. They are found in the “File / Manage / Custom Functions” menu.

FileMaker Custom Function dialog

Here is where you create a custom function

Custom Functions have a few characteristics to them:

  • They require a name.
  • They can accept parameters, but don’t have to.
  • Some sort of calculation is required.
  • They are file-specific.
  • At this time, the custom function dialog is not type-ahead enabled, as the regular calculation dialog is.
  • They can return a single result, or recurse until a condition is met.

Let’s look at each of these.

Require a name

Once a name is given, the custom function is available both in the functions list (under Custom Functions) and as part of the type-ahead. Some folks clearly identify a custom function with a prefix in its name. I tend to use “_” at the beginning: “_CF_BeginningOfWeek” (a function that returns the date of the first day of a week), but the name can be structured however you like, even containing spaces between words.

Accept parameters

Parameters are bits of information we pass into the custom function for use in its calculation. These parameters appear in the ( ) after the name. For example, in the “_CF_BeginningOfWeek ( anyDate )” custom function, I am passing in a date. I’ve named the parameter “anyDate”, but the value passed in could be any date expression. I could work with any of these:

_CF_BeginningOfWeek ( Get ( CurrentDate ) )
_CF_BeginningOfWeek ( Date ( 3 ; 19 ; 2018 ) )
_CF_BeginningOfWeek ( "5/8/2018" )

What’s in the parentheses gets passed into the custom function and is used throughout it. For example, in this custom function, Date ( 3 ; 19 ; 2018 ) is passed into the calculation, and this value is assigned to the parameter “anyDate”, the one I declared in the CF setup.

Let ([
  _dow = DayOfWeek ( anyDate ) ; // anyDate = 3/19/2018, a Monday, so _dow = 2
  _offset = _dow - 1 // days since the first day of the week
] ;
  GetAsDate ( anyDate - _offset ) // returns 3/18/2018, the Sunday
)

A CF can accept any number of parameters, including zero. Many folks create custom functions that turn the numeric value returned by a FileMaker function into something readable. The FileMaker function Get ( Device ) returns 1 if the user is on a Mac. Would you remember that 1 means Mac (especially if your device of choice is a Windows machine)?

So folks will wrap this function in a custom function and call the custom function instead.

One other note about parameters: however many parameters you declare, each one must be passed a value when the function is called. Look at this custom function called _IsOverlap. Its purpose is to determine whether two date ranges overlap.

As I built this for a project, I realized that sometimes there is no end date for one of the ranges. I still need the function to work, so I have to pass something in for the end dates. Inside the custom function, I check whether the EndDate parameters are empty; if so, I substitute a placeholder date.
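The body of such a function might be sketched like this (the placeholder date and parameter names are illustrative, not my exact implementation):

```
// _IsOverlap ( Start1 ; End1 ; Start2 ; End2 )
// Returns 1 (true) if the two date ranges overlap
Let ([
  _end1 = If ( IsEmpty ( End1 ) ; Date ( 1 ; 1 ; 3000 ) ; End1 ) ; // placeholder for an open-ended range
  _end2 = If ( IsEmpty ( End2 ) ; Date ( 1 ; 1 ; 3000 ) ; End2 )
] ;
  Start1 ≤ _end2 and Start2 ≤ _end1
)
```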

The calculation

The whole point of a custom function is to perform some calculation, so do something in this section. You can use any combination of built-in FileMaker functions, from-scratch calculations, or even other custom functions (provided they already exist).

The calculation can be simple or complex. It all depends on your needs, as we’ll discuss in a bit.

Other characteristics

Custom functions are file-specific. Each file has zero custom functions at first creation, but it is a simple matter to copy a set of custom functions from one file to another: select the ones to copy in Manage Custom Functions in File A and paste them into the same dialog in File B.

The custom function calculation dialog does not have the type-ahead feature we’ve now gotten used to in other areas. Some folks complain. I don’t; I deal with it and move on. You can select FileMaker functions from the dialog’s list, but it is often simpler to type them out.

And finally, custom functions can be recursive: a function can call itself until a condition is met and then return the result. We will address recursion in a later post.

Well-used custom functions

FileMaker custom functions have their uses, and you can decide for yourself when to reach for them. Others have written good thoughts on the subject. Beyond that, the decision to create a CF usually comes down to the following ideas.

One calc used many times

Sometimes a calculation needs to be used throughout the whole custom app. If many places in your app require the date of the first day of any week, that seems a likely candidate for a CF. This has the added benefit that if the calculation needs to change for some reason (returning the Sunday date instead of the Monday date), there’s one place to update all the results. That’s useful.

Human-readable result

You want a result that makes sense to humans: _CF_GetDevice will return “Mac” instead of 1. Making it readable means you don’t even have to think. Here’s another example: a set of custom functions built on the Get ( Device ) FileMaker function.

Each of these returns 1 or 0, true or false for a given device.

These can be used in a series of logic script steps. For example:
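A sketch of such a script, using the hypothetical device functions above (the layout names are placeholders):

```
If [ _CF_IsIPhone ]
    Go to Layout [ "Orders | iPhone" ]
Else If [ _CF_IsIPad ]
    Go to Layout [ "Orders | iPad" ]
Else
    Go to Layout [ "Orders | Desktop" ]
End If
```

Because each function reads as a plain true/false question, the script's branching logic is self-documenting.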

One more thing

Throughout the history of custom functions, many people have discussed their pros and cons. Some of those cons have gone away now that we all have access to the feature, but it is worth noting there still are a few. We'll look at those in a later post.

Resources for Custom Functions:

There are tons of FileMaker custom functions out there for different purposes. Developers come up with CFs that fit a specific need and then make them available. Here are just a few resources.

FileMaker Custom functions are a part of the full toolbox we all now have at our fingertips. Knowing how they work and when/why you would use them is important. Explore them and see how they fit into your development work.

I think it is time we stopped getting too worked up about product version numbers. FileMaker is not a product. It is a platform. Platforms are different. They don't really have a product release every 2 to 4 years; they have regular releases of whatever is ready to go. That's why I don't care much about the number 17. This release has great new features and capabilities that once again improve how we approach building apps, just like last year's big game changer. Let's take a closer look, shall we?

Missing the Point

I see a lot of bitching about how FileMaker 17 should be FileMaker 16.5 because it doesn't have enough new end-user features to earn a new version number. Frankly, those forum posts are missing the point. First, this release does have several game-changing features and capabilities. Second, as I just said, FileMaker is a platform, not a product. If you know the difference between the two, all this kvetching over a given release and its number seems pointless. We get a new version every year. It always includes great new stuff. The next batch of new stuff is about a year away.

FileMaker 17 Features I won’t be Talking About

Most of them :-). We'll have other blog posts that go into more detail on each of the new features and how they work. In this post I want to focus on just a couple that I think have major implications for the platform, not necessarily because of what they do specifically in your apps, but because of how they might affect how we approach building high-value FileMaker apps for our organizations and customers.

The End Of a Two-Class System

FileMaker Pro 17 Advanced is now the only desktop client. There is no more plain FileMaker Pro. That means that everyone has access to the advanced tools: custom functions, the DDR, custom menus, etc. Everyone can copy and paste code. Everyone can take advantage of RealTime Developer Intelligence tools like FMPerception and analysis tools like Base Elements and Inspector Pro.

There are no more "haves" and "have-nots". Just "haves".

This means that product developers can now drop the ridiculous workarounds they built to handle the fact that most of their customers couldn't copy custom functions into their files.

The Modular FileMaker 2.0 Guidelines, when they are released at FileMaker DevCon 2018, will drop the guideline against using custom functions entirely. I was famous for advising against custom functions in the past because they decreased the likelihood that code could be re-used by other people, who didn't have Advanced. Since it was focused on sharing code, the Modular FileMaker 1.0 Guidelines suggested not using them. I have moderated that position somewhat over the years, but now we can embrace them fully.

Since everyone is on a level playing field now, it will be easier to teach people to share code and build products that everyone can use.

Fewer UI Hacks, More Business Logic

I spent years developing workarounds to UI limitations in FileMaker. Back when we got one release every 30 months that NEVER included new UI widgets and patterns, this may have made sense, but FileMaker 17's Master-Detail feature has convinced me it is no longer a good use of time.

Over the last couple of years, FileMaker has added major new UI features that change the way you might develop your interface. We have popovers, button bars, top navigation, and card windows, just to name a few. With this release, we get Master-Detail. It no longer makes sense to waste cycles building massive, leaky abstractions like my old version of MasterDetail.

Instead, I think we should focus on the parts of our systems that aren't going to change as much: the data layer and the logic layer. Better yet, design systems so you can rebuild the UI at any time without having to rebuild the data layer and the business logic.

Data Migrations

The FMDataMigration tool is a game changer. Using our soon-to-be-released Otto product, we have fully automated multi-file, multi-gigabyte data migrations going from development servers to production servers in just minutes. That is not a typo: gigabytes of data migrated from server to server in just minutes! The implications of this are massive and wide-reaching.
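For those who haven't seen it, the tool is a command-line utility that merges the data from a production file into an empty clone of the development file. A sketch of an invocation might look like the following; the file names and accounts are placeholders, and you should check the tool's documentation for the exact options:

```
FMDataMigration \
  -src_path Production.fmp12 \
  -src_account admin -src_pwd ***** \
  -clone_path DevClone.fmp12 \
  -clone_account admin -clone_pwd ***** \
  -target_path Migrated.fmp12
```

The result, Migrated.fmp12, has the new schema and logic from the clone with the live data from production.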

You may still choose to separate your solution into multiple files; there are many reasons to do so. But avoiding data imports no longer needs to be one of them. Because you are free from worrying about data imports, you can find different ways of separating that make more sense for your scenario. You might make some features into self-contained modules that can be maintained separately. Or you may shove everything back into one file. It's up to you.

Live development on production servers has always been frowned upon, but it was almost a necessity because some systems could take hours or even days to go through a data migration. That excuse is now gone. If you run a busy, complex FileMaker solution, you should be doing development on a development server and running regular automatic migrations.

The Data API is Out of Beta

The Data API is now out of beta and includes a tiered pricing model that fits nicely into the new simpler overall licensing model.  Finally, we have a pricing model that makes sense.

Developers can feel confident about building on top of the new Data API because it is official and generates revenue for FileMaker. I know some folks wanted it to be free so they could get around FileMaker license costs, but that is a very short-sighted view. If you rely on the FileMaker platform, you should want the vendor (FMI) to thrive.
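To ground that a bit: the Data API is plain REST over HTTPS. A sketch of the two basic calls, with a placeholder host, database, layout, and credentials, looks something like this:

```
# Log in to get a session token (v1 Data API sessions endpoint)
curl -X POST "https://myserver.example.com/fmi/data/v1/databases/Sales/sessions" \
  -H "Content-Type: application/json" \
  -u apiuser:secret \
  -d "{}"

# Use the returned token to fetch records from a layout
curl "https://myserver.example.com/fmi/data/v1/databases/Sales/layouts/Orders/records" \
  -H "Authorization: Bearer <token>"
```

Any language or tool that can speak HTTP and JSON can now be a first-class FileMaker client.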

Continuous Improvement

This year's release (notice how I didn't say "17") includes a number of compelling improvements to both the end-user experience and the developer experience. It is another step in a continuous process of improvement. Each year it gets better and better, and we would do well to work that fact into our plans.