Business Central on Linux? Here, hold my beer!

Sorry if this one is a bit long, but think of it as a bit of a brain dump. I’ve been asked repeatedly how I managed to get Business Central running on Linux using Wine, so here’s the full’ish story.

Business Central doesn’t run on Linux. Everyone knows this. Microsoft built it for Windows, and that’s that.

So naturally, I had to try.

What started as curiosity turned into months of reverse engineering, debugging Wine internals, and learning more about Windows APIs than I ever wanted to know. Stefan Maron and I presented the results at Directions EMEA, and people kept asking for a proper write-up. Here it is. (Oh no, I’ve already over-promised.)

Why Even Try?

Because nobody had done it before. That’s it. That’s the reason.
Well, Microsoft probably has, but they have the source code: they can target Linux and comment out the functionality that won’t work. That kind of cheating isn’t available to the rest of us.

The cost savings and performance benefits we discovered later were a nice bonus. Windows runners on GitHub Actions cost twice as much as Linux runners. Builds run faster. Container startup is dramatically quicker. But none of that was the original motivation.

Sometimes you do things just to see if they’re possible.

The Native .NET Attempt

My first approach was simple. BC runs on .NET Core now, and .NET Core runs on Linux. Problem solved, right?

Not even close.

I copied the BC service tier files to a Linux machine and tried to start them. Immediately, it crashed.

The moment you try to start the BC service tier on Linux, it crashes while looking for Windows APIs. The code makes assumptions everywhere. It wants to know which Active Directory domain you’re in; I think this is used to build the complete web service URLs. It assumes Windows authentication is available. These aren’t just preference checks that fail gracefully. The code is designed for a Windows environment.

I spent a few evenings trying different things, but it became clear this wasn’t going to work. BC has Windows baked into its DNA. So I had to try something else.

Enter Wine

If you can’t make the code Linux native, maybe you can make Linux look enough like Windows. That’s what Wine does. It’s a compatibility layer that translates Windows API calls to Linux equivalents.

Wine has been around forever. It runs thousands of applications. Mostly games and desktop software. Heck, Proton, which Steam uses to run Windows games on Linux, is based on Wine. The keyword there is “mostly.” When I checked Wine’s compatibility database, there were maybe 50 server applications listed, and 48 of them were game servers. And that was out of over 16,000 supported programs.

Server software is a different beast. It uses APIs that desktop applications never touch. HTTP.sys for web serving. Advanced authentication protocols. Service management. Wine’s developers understandably focused on what most people actually use.

But Wine is open source. If something is missing, you can add it. Well, if you can write C, which I last did at university more than 20 years ago. But I have something better than C skills: debugging skills and a stubborn refusal to give up. Plus energy drinks, and AI. Lots of AI.

The Debug Loop

My approach was brute force. Start the BC service tier under Wine with full debug logging enabled. Watch it crash. Find out which API call failed. Implement or fix that API in Wine. Repeat.

The first crash came immediately. Some localisation API wasn’t returning what BC expected. Easy fix. Then the next crash. And the next.

I kept two resources open at all times: Microsoft’s official API documentation and a decompiler targeting BC’s assemblies. The docs told me what an API was supposed to do. The decompiled code told me exactly how BC was using it. Just a matter of connecting the dots.

Some APIs were straightforward translations. Others required understanding subtle Windows behaviours that aren’t documented anywhere. Why does this particular call return data in this specific format? Because some Windows component, somewhere, expects it that way, and BC inherited that expectation.

Plus, it didn’t help that the Microsoft documentation is often incomplete and just includes placeholder info for some parameters and return values.

I even had to program my own Event Log, because Wine doesn’t have one. So the entire task was just as much a tooling test as a programming one. I created loads of scripts to iterate over the output and filter out just the logs I needed.
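Those filter scripts were nothing fancy. Here’s a simplified sketch in Python; the log format is assumed from Wine’s usual `fixme:`/`err:` channel prefixes, not taken from my actual scripts:

```python
import re
from collections import Counter

# Wine prefixes unimplemented-API complaints with "fixme:" and hard
# failures with "err:", e.g. "0024:fixme:httpapi:HttpWaitForDisconnect stub!".
LINE = re.compile(r"(?P<level>fixme|err):(?P<channel>\w+):(?P<func>\w+)")

def summarize(log_text: str, top: int = 10):
    """Count which APIs Wine complains about most often."""
    hits = Counter(
        (m.group("channel"), m.group("func"))
        for m in map(LINE.search, log_text.splitlines())
        if m
    )
    return hits.most_common(top)
```

Run the service with something like `WINEDEBUG=+seh,+loaddll` redirected to a file, feed the file to `summarize()`, and the loudest offenders float to the top. That’s essentially the debug loop, automated.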

Getting It to Start

Before the service could even stay running, several hurdles arose that had nothing to do with Wine’s API coverage.

SQL Server encryption was an early roadblock. Not because it didn’t work; it was just a hassle to set up. BC insists on encrypted database connections, but the PowerShell cmdlets that normally configure certificates and connection strings don’t run on Linux. I had to reverse engineer what the cmdlets actually do and replicate each step manually.

The same problem hit user management. New-NavServerUser flat out refuses to work without Windows authentication. The cmdlet checks for valid Windows credentials before it does anything else. No Windows, no user creation.

My solution was pragmatic: bypass the cmdlets entirely. I wrote code that injects NavUserPassword users directly into the SQL database. BC stores passwords hashed exactly 100,001 times. Yes, that specific number. Finding that took longer than I’d like to admit.

Kerberos support in Wine was incomplete for the authentication modes BC wanted. Specifically, the SP800-108 CTR HMAC algorithm wasn’t implemented in Wine’s bcrypt. BC uses this for certain key derivation operations, so I had to add it. Again, it was just a matter of seeing in the logs what BC expected and making Wine do that.
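SP800-108 in counter mode is straightforward once you see the fixed-input layout. Here is a minimal Python sketch of the construction, following the NIST counter-mode layout as Windows’ bcrypt commonly uses it; whether my Wine patch matches this byte for byte is a separate question:

```python
import hashlib
import hmac
import math
import struct

def sp800_108_ctr_hmac(key: bytes, label: bytes, context: bytes,
                       out_len: int, digest=hashlib.sha256) -> bytes:
    """NIST SP 800-108 KDF in counter mode, with HMAC as the PRF.

    Fixed input per block i: [i]_32 || label || 0x00 || context || [L]_32,
    where i and L (output length in bits) are 32-bit big-endian integers.
    """
    prf_len = digest().digest_size
    length_bits = struct.pack(">I", out_len * 8)
    blocks = []
    for i in range(1, math.ceil(out_len / prf_len) + 1):
        data = struct.pack(">I", i) + label + b"\x00" + context + length_bits
        blocks.append(hmac.new(key, data, digest).digest())
    return b"".join(blocks)[:out_len]
```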

When It “Worked”

After a week of this, something happened. The service started. It stayed running. I called an OData endpoint and got… HTTP 200. Success! Sort of.

The response body was empty. And after that first request, the service froze completely.

What was going on?

The HTTP.sys Rabbit Hole

BC uses Windows’ kernel-mode HTTP server (HTTP.sys) for its web endpoints. Wine had a partial implementation, but “partial” is generous. Looking at the httpapi.spec file, I counted 13 functions that were just stubs: HttpWaitForDisconnect, HttpWaitForDisconnectEx, HttpCancelHttpRequest, HttpShutdownRequestQueue, HttpControlService, HttpFlushResponseCache, HttpGetCounters, HttpQueryRequestQueueProperty, HttpQueryServerSessionProperty, HttpQueryUrlGroupProperty, HttpReadFragmentFromCache, HttpAddFragmentToCache, and HttpIsFeatureSupported didn’t even exist.

Wine’s HTTP.sys could accept connections and start processing requests. It just couldn’t reply with a body payload, finish requests properly, or clean up afterwards. The server literally didn’t know how to release a connection once it was established. That’s why it froze after the first request.

I had to implement actual connection lifecycle management: the IOCTL handlers for waiting on disconnects, cancelling requests, and properly sending response bodies with the is_body and more_data flags. Server software needs to close connections cleanly. Games don’t care about that, or they use different APIs.

I also had to resort to extensive Wireshark tracing to see what BC expected at the network level. Once I saw the raw HTTP traffic, it was easier to identify the missing pieces. I compared traffic from a Windows BC instance to a Wine one, identified what was missing or malformed, then went back to the Wine code and fixed it.

Actually Working

Once the HTTP.sys fixes were in, responses actually came back with content. The freezing stopped.

That first real API response with actual data felt like winning the lottery. Until I noticed the response was always the same, because I had just put a fixed response in the handler to test things. It took me an hour to realise I was looking at my own test code’s output, not BC’s.

Once I removed my test code and let BC handle the responses properly, it actually worked. The web client isn’t functional yet, but that wasn’t the main goal. The core is there: compile extensions, publish apps, run tests. That’s what I was after. Heck, a Hello World which showed the code ran on Linux was enough for me at that point.

Directions EMEA 2025 Presentation

Last year, before Directions EMEA, Stefan Maron reached out. He had heard about my Wine experiments and wanted to collaborate on a presentation. We teamed up and put together a talk showing the journey, the technical details, and a live demo. Well, we skipped the live demo part since doing live demos of experimental software is a recipe for disaster.

Once I had something functional, Stefan and I measured it properly. Same test app, same pipeline, Windows versus Linux.

The first working build: Linux finished in 13.4 minutes versus 18.4 minutes on Windows. That’s 27% faster out of the gate. Not bad, not bad at all.

After optimisation (smaller base image, certain folders in RAM, no disk cleanup overhead), Linux dropped to 6.3 minutes. Windows stayed around 18 minutes. 65% faster. But all this was on GitHub’s hosted runners; what if we could optimise further?

With caching on a self-hosted runner: 2 minutes 39 seconds total. At that point, we’d shifted the bottleneck from infrastructure to BC itself. Pure service startup time, waiting for metadata to load, was the limiting factor.

The container setup phase showed the biggest difference. Wine plus our minimal Linux image pulled and started in about 5 minutes. The Windows container took nearly 16 minutes for the same operation.

What Didn’t Work

The web client doesn’t work yet. I haven’t put much effort into it since it wasn’t the main goal. Last time I tried, I had the web server running, but the NST and the web service just wouldn’t talk to each other. I stopped there, as Directions was coming up and I wanted to focus on the service tier.

The management endpoints don’t function. We had to write custom AL code to run tests via OData instead.

Some extensions that use uncommon .NET APIs crash immediately. If your extension does something exotic with Windows interop, it won’t work here.

What’s Next

This was always a proof of concept. The goal was to answer “can it be done?” and the answer is yes, with caveats.

Big disclaimer: This is purely a “see if I could” project. It’s not ready for production use, and I wouldn’t even recommend it for automated testing pipelines in its current state. It’s an experiment.

The code is up on GitHub.

Mine:
  • BC4Ubuntu is my first try. Don’t use it; it’s messy and unoptimised.
  • wine64-bc4ubuntu has the custom Wine build.

Stefan’s:
  • BCOnLinuxBase is the optimised base image.
  • BCDevOnLinux is the actual Dockerfile for BC. This is the one to use. But be careful: with great power comes great responsibility.

I’ve also got the NST running on ARM hardware. Getting SQL Server to work on ARM is an entirely different project for another time.

Would I run production on this? Absolutely not. But that was never the point.

Sometimes you learn the most by doing things the “wrong” way. But it was a fun ride.

And can you keep a secret? More than 98% of the code was written by AI. If I had done it today, the last 2% would have been included as well.


Stefan Maron contributed significantly to the pipeline work. This was very much a joint effort.

Native AL Language Server Support in Claude Code

If you’re using Claude Code for Business Central development, you’ve probably noticed that while it’s great at writing AL code, it doesn’t truly understand your project structure. It can’t jump to definitions, find references, or see how your objects relate to each other.

Until now.

I’ve built native AL Language Server Protocol (LSP) integration for Claude Code. This means Claude now has the same code intelligence that VS Code has: symbol awareness, navigation, and structural understanding of your AL codebase.

Wait, didn’t you already do this?

Yes! A few months ago I contributed AL language support to Serena MCP, which brought symbol-aware code editing to Business Central development. Serena works with any MCP-compatible agent: Claude Desktop, Cursor, Cline, and others.

This native Claude Code integration is different. Instead of going through MCP, it hooks directly into Claude Code’s built-in language server support. The result is a more polished, seamless experience specifically for Claude Code users.

Serena MCP: Universal, works everywhere, requires MCP setup
Native LSP: Claude Code only, tighter integration, zero-config once installed

If you’re using Claude Code as your primary tool, the native integration is the way to go. If you switch between different AI coding assistants, Serena gives you AL support across all of them.

What is this?

The AL Language Server is the engine behind VS Code’s AL extension. It’s what powers “Go to Definition”, “Find All References”, symbol search, and all the other navigation features you use daily.

By integrating this directly into Claude Code, the AI assistant now has access to:

  • Document symbols: all tables, codeunits, pages, fields, procedures in a file
  • Workspace symbols: search across your entire project
  • Go to Definition: jump to where something is defined
  • Go to Implementation: jump to implementations
  • Find References: see everywhere something is used
  • Hover information: type information and documentation
  • Call hierarchy: see what calls what, incoming and outgoing
  • Multi-project support: workspaces with multiple AL apps work fully

This isn’t regex pattern matching. This is the actual Microsoft AL compiler understanding your code.

Why does this matter?

Without LSP, Claude Code treats your AL files as plain text. It can read them, but it doesn’t understand the relationships between objects. Ask it to find all the places where Customer."No." is used, and it has to grep through files hoping to find matches.

With LSP, Claude can ask the language server directly. It knows that Customer is a table, that "No." is a field of type Code[20], and it can find every reference instantly.

The difference is like asking someone to find a book in a library by reading every page versus using the catalog system.

Real example

Here’s what Claude Code can do with LSP on a Customer table:

Go To Definition - On CustomerType enum reference at line 77:
→ Defined in CustomerType.Enum.al:1:12

Hover - Same position shows type info:
Enum CustomerType

Document Symbols - Full symbol tree for Customer.Table.al:
Table 50000 "TEST Customer" (Class) - Line 1
  fields (Class) - Line 6
    "No.": Code[20] (Field) - Line 8
      OnValidate() (Function) - Line 13
    Name: Text[100] (Field) - Line 22
    "Customer Type": Enum 50000 CustomerType (Field) - Line 77
    Balance: Decimal (Field) - Line 83
    ...
  keys (Class) - Line 131
    Key PK: "No." (Key) - Line 133
    ...
  OnInsert() (Function) - Line 158
  OnModify() (Function) - Line 168
  UpdateSearchName() (Function) - Line 190
  CheckCreditLimit() (Function) - Line 195
  GetDisplayName(): Text (Function) - Line 206

Every field with its type. Every key with its composition. Every procedure with its line number. Claude can now navigate your code like a developer would.

Requirements

  • Claude Code 2.1.0 or later. Earlier versions have a bug that prevents built-in LSPs from working.
  • VS Code with AL Language extension. The plugin uses Microsoft’s AL Language Server from your VS Code installation.
  • Python 3.10+ in your PATH
  • A Business Central project with standard AL project structure and app.json

Installation

Step 1: Enable LSP Tool

Set the environment variable before starting Claude Code. LSP support is new, and I don’t think it is production-ready in every case yet, which is why it has to be explicitly enabled:

# PowerShell (current session)
$env:ENABLE_LSP_TOOL = "1"
claude

# PowerShell (permanent)
[Environment]::SetEnvironmentVariable("ENABLE_LSP_TOOL", "1", "User")

# Bash
export ENABLE_LSP_TOOL=1
claude

Step 2: Install the Plugin

  1. Run claude
  2. /plugin marketplace add SShadowS/claude-code-lsps
  3. Type /plugins
  4. Tab to Marketplaces
  5. Select claude-code-lsps
  6. Browse plugins
  7. Select al-language-server-python with spacebar
  8. Press “i” to install
  9. Restart Claude Code

That’s it. The plugin automatically finds the newest AL extension version in your VS Code extensions folder.

Repository: github.com/SShadowS/claude-code-lsps

What’s next?

The current wrapper is Python-based. A few things I’m looking at:

  • Go-compiled binaries for faster startup and no runtime dependencies
  • Better error handling for more graceful recovery when the language server hiccups
  • Testing on more setups with different VS Code versions and extension configurations

Try it out and give feedback

If you’re doing BC development with Claude Code, give this a try. The difference in code navigation and understanding should be significant.

I’d love to hear your feedback. What works, what doesn’t.

If you open an issue on GitHub, please attach %TEMP%/al-lsp-wrapper.log, as it helps me a lot during debugging. This log will be disabled in a few weeks; I just need it in the beginning.

Repository: github.com/SShadowS/claude-code-lsps


This is part of my ongoing work on AI tooling for Business Central development. See also: CentralGauge for benchmarking LLMs on AL code, and my MCP servers for BC integration.

Lazy replication of tables with NodeJS, Custom APIs and Webhooks in Business Central (Part 1)

“What if I could replicate a table to a system in an entirely different database with a different language in an entirely different OS evenly?”

I wondered what I could do with webhooks that wasn’t just a standard use case. This post isn’t a how-to for creating custom API pages in AL or for how webhooks work; other people have done that, and they are way better at explaining it than me.

The flow I wanted:

  1. Add a generic Business Central AL extension, which exposes the installed APIs and the Field table. This is the main extension; it will not expose any data tables itself, but will be used to handle the communication.
  2. Add an extension with custom API pages for the tables needing replication. (In this example it will also contain the table, but normally it would not.)
  3. The server (called CopyCat) will call the main extension (ALleyCat) and get a list of all API pages within a specific APIGroup (BadSector), along with their table structure.
  4. CopyCat will connect to a secondary database and create tables.
  5. CopyCat will copy all records.
  6. CopyCat will subscribe via webhooks for further changes.
  7. Each webhook notification is just a record ID and the change type, e.g. Updated or Deleted. CopyCat will either request the new/updated record or delete it as needed.
    It keeps a list of new or modified records in an array and only requests a predefined number of records per second, so the main service tier won’t be overloaded.
  8. CopyCat will periodically request table changes from BC, and if new fields are detected, they are added to the replicated table.
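Step 7’s throttling is the interesting bit. CopyCat itself is NodeJS, but the idea fits in a few lines of any language; here is a Python sketch with hypothetical `fetch`/`delete` callbacks standing in for the actual OData calls:

```python
import time
from collections import deque

class ChangeQueue:
    """Queue webhook notifications and drain them at a fixed rate
    so the BC service tier isn't overloaded (step 7 above)."""

    def __init__(self, fetch, delete, max_per_sec: int = 5):
        # fetch/delete are hypothetical callbacks for the OData requests
        self.pending = deque()
        self.fetch, self.delete = fetch, delete
        self.interval = 1.0 / max_per_sec

    def on_webhook(self, record_id: str, change_type: str):
        # Webhook payloads carry only the record ID and change type.
        self.pending.append((record_id, change_type))

    def drain(self):
        while self.pending:
            record_id, change_type = self.pending.popleft()
            if change_type == "deleted":
                self.delete(record_id)
            else:  # created / updated: fetch the full record from the API
                self.fetch(record_id)
            time.sleep(self.interval)  # rate-limit requests against BC
```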
Continue reading “Lazy replication of tables with NodeJS, Custom APIs and Webhooks in Business Central (Part 1)”

Custom DotNet assemblies in AL

I was just making the last touch-ups for the next posts when I noticed that there are not that many examples of how to use custom assemblies in AL code.

With the release of the full version of Business Central on the 1st of October, some info on this topic is more and more relevant. So this is just a quick post based on my MQTT client example.

This is basically the same code, though some things have been cut out just to slim it down a bit. Everything related to sending has been removed.

I used txt2al and renamed a few of my vars, as a few names overlapped with new keywords in AL.

Why didn’t I make it from scratch in AL? Two reasons. First, txt2al is just a great tool and works almost flawlessly. Second, VS Code is a terrible tool for writing AL code that contains DotNet: there is almost no help whatsoever. So until this is fixed, I would recommend using C/SIDE and then converting the code afterwards.

The following has been shown elsewhere, but I just want to reiterate it. To use custom assemblies, you need to specify which directory they are in and then define them afterwards.

Like here I have mine in a subdir, called .netpackages, of my project folder.

This is then referred to in the .vscode\settings.json file like this.
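The screenshot that used to be here is gone, but the setting in question is the AL extension’s assembly probing path; assuming the current setting name, `al.assemblyProbingPaths`, it looks roughly like this:

```json
{
    "al.assemblyProbingPaths": [
        "./.netpackages"
    ]
}
```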

With this done, it is now possible to use the assemblies in the project, but first we need to refer to them in a dotnet declaration block in the AL code.

This is where C/SIDE comes in handy, as these lines are not discoverable in VS Code. You just have to know what to write; not impossible, just really annoying.
The same can be said about DotNet triggers: you just have to know them, as there is no autocomplete.
So you can see DotNet development isn’t really implemented in AL yet.

E.g. it is expected that you just know the trigger name and all its parameters, like the first line in the code below. No help.

Summary:
So if you want to do DotNet in AL, just add the variables and triggers in C/SIDE and then do a txt2al. You will thank yourself afterwards.

The entire project can be found here https://github.com/SShadowS/AL-MQTT-example