MLX - Day 3: Running the MLX sample app on macOS, iOS and visionOS

April 25, 2024 · 5 min read · #swift, #ios, #mlx, #llm

In the previous article, we installed and ran a sample program using the Swift APIs from the MLX library. Those APIs provide low-level functionality to create, train and evaluate machine learning models.

For iOS developers, the primary use case is using pre-trained models to provide useful features to our users.

In this article, we will run some sample apps from the mlx-swift-examples repository to explore how to use mlx-swift to create apps for Apple platforms such as iOS, macOS and visionOS.

MLX Swift Examples

The MLX Swift Examples repository provides several example projects built with mlx-swift.

As of the writing of this article, these are the available examples:

  • MNISTTrainer: An example that runs on both iOS and macOS that downloads MNIST training data and trains a LeNet.
  • LLMEval: An example that runs on both iOS and macOS that downloads an LLM and tokenizer from Hugging Face and generates text from a given prompt.
  • LinearModelTraining: An example that trains a simple linear model.
  • llm-tool: A command line tool for generating text using a variety of LLMs available on the Hugging Face hub (see the example invocation after this list).
  • mnist-tool: A command line tool for training a LeNet on MNIST.
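For example, the command line tools can be launched through the mlx-run helper script that ships with the repository. If I recall the README correctly, an invocation looks roughly like this (the prompt is just an illustration):

~ ./mlx-run llm-tool --prompt "write a haiku about Swift"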

I’d expect to see many more examples published soon, especially as we approach WWDC, where I’d expect some big announcements around MLX.

LLMEval

Large Language Models have been the hottest topic in the machine learning community since the release of ChatGPT. The ability to ask a model to perform various tasks using natural language makes it very accessible for end users.

The MLX Swift Examples repository provides a sample project called LLMEval. The LLMEval project provides the following features:

  • Download LLM models from Hugging Face (you can think of Hugging Face as a GitHub for the machine learning community)
  • A simple chat UI for interacting with LLM models locally (see the sketch after this list)
  • Evaluation against common open source models such as Mistral, Llama and Phi
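To give a rough idea of the "simple chat UI" part, here is a minimal SwiftUI sketch of a prompt/response screen. This is my own illustration rather than code from LLMEval; runModel is a placeholder for wherever the MLX generation call would go.

import SwiftUI

// Minimal prompt/response screen, just to illustrate the shape of the UI.
struct ChatSketchView: View {
    @State private var prompt = ""
    @State private var output = ""

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            // Prompt input
            TextField("Ask something...", text: $prompt)
                .textFieldStyle(.roundedBorder)

            // Run generation asynchronously so the UI stays responsive
            Button("Generate") {
                Task { output = await runModel(prompt) }
            }

            // Generated text
            ScrollView { Text(output) }
        }
        .padding()
    }

    // Placeholder for the actual MLX-backed generation call.
    private func runModel(_ prompt: String) async -> String {
        "(model output for: \(prompt))"
    }
}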

Running LLMEval

Step 1: Clone and open the mlx-swift-examples project in Xcode

~ git clone https://github.com/ml-explore/mlx-swift-examples.git
~ cd mlx-swift-examples
~ xed .

Step 2: Adjust code signing

Adjust the code signing configuration to your (personal) team

[Screenshot: code signing settings in Xcode]

Choose the LLMEval scheme to run

[Screenshot: LLMEval scheme selection]

Testing the LLMEval macOS app

On first launch, the app will download the model for local use. The download might take some time depending on the model and your internet connection, so please be patient.

Test a different LLM model

The demo app has configurations for several LLM models, listed in the LLM folder.

You can open ContentView.swift, change the line let modelConfiguration = ModelConfiguration.phi4bit to let modelConfiguration = ModelConfiguration.codeLlama13b4bit, and run the app again to test the Code Llama model.
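In other words, it is a one-line change in ContentView.swift (both preset names come from the sample's LLM folder):

// Default model in the sample:
// let modelConfiguration = ModelConfiguration.phi4bit

// Swap in another preset, e.g. Code Llama:
let modelConfiguration = ModelConfiguration.codeLlama13b4bit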

Testing the LLMEval iOS app

Things are a little more complicated for the iOS demo app. Unfortunately, the iOS Simulator doesn't support the Metal features needed to run mlx-swift.
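If you want the project to remind you of this at build time, a small compile-time guard like the following works; this is my own addition, not something from the sample:

#if targetEnvironment(simulator)
// mlx-swift needs a real Metal GPU, which the iOS Simulator doesn't provide.
#warning("Run LLMEval on a physical iOS device (or on macOS); the Simulator can't run mlx-swift.")
#endif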

You will need to modify the app's bundle ID so that Xcode can generate a new provisioning profile for your development certificate, allowing you to deploy the app to a real iOS device. You will also need an iOS device with as much RAM as possible, due to the memory-intensive nature of LLM models.

[Screenshot: iOS code signing settings]

For testing purposes, I'm using an iPhone 15 Pro Max with 8 GB of RAM in the following video:

Testing the LLMEval visionOS app

I don't have an Apple Vision Pro to test the demo app, but others have successfully deployed the app on real Apple Vision Pro devices.
