The package is a mediator between Apple's Core ML Stable Diffusion implementation and your app, letting you run text-to-image and image-to-image models:
```swift
let manager = GenerativeManager()

// Run generation with the prompt configuration on the loaded pipeline;
// the result is one optional CGImage per requested image.
let images: [CGImage?] = try await manager.generate(
    with: config,
    by: pipeline
)
```
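The `config` and `pipeline` values are not created by the snippet above. Here is a minimal sketch of one way to set them up, assuming they map to `StableDiffusionPipeline` and its `Configuration` type from Apple's ml-stable-diffusion package; the resource path is a placeholder, and the exact initializer parameters vary between package versions:

```swift
import CoreML
import StableDiffusion

// Placeholder path to a folder of compiled .mlmodelc resources.
let resourcesURL = URL(fileURLWithPath: "/path/to/compiled/models")

// Load the pipeline; reduceMemory trades speed for a smaller footprint.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    configuration: MLModelConfiguration(),
    reduceMemory: true
)
try pipeline.loadResources()

// Describe what to generate.
var config = StableDiffusionPipeline.Configuration(prompt: "A red apple on a table")
config.stepCount = 25
config.seed = 42
```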
Generation speed can be unpredictable: a model will sometimes run noticeably slower than it did on a previous run. Core ML appears to schedule work across compute units dynamically, and its choices are not always optimal.
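If scheduling variance becomes a problem, one knob worth trying is pinning the model to specific compute units through the standard `MLModelConfiguration` API, so Core ML no longer chooses at run time. Which units perform best depends on the model and device, so treat the choice below as an example rather than a recommendation:

```swift
import CoreML

// Pin execution to the CPU and Neural Engine; .cpuAndGPU and .all
// are the other options worth benchmarking on your target devices.
let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine
```

Pass this configuration to the pipeline initializer in place of the default `MLModelConfiguration()` shown earlier.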
| File Name | Description |
|---|---|
| TextEncoder.mlmodelc | Encodes input text into a vector space for further processing. |
| Unet.mlmodelc | Core model handling the transformation of encoded vectors into intermediate image representations. |
| UnetChunk1.mlmodelc | First segment of a segmented U-Net model for optimized processing in memory-constrained environments. |
| UnetChunk2.mlmodelc | Second segment of the segmented U-Net model, completing the tasks started by the first chunk. |
| VAEDecoder.mlmodelc | Decodes the latent representations into final image outputs. |
| VAEEncoder.mlmodelc | Compresses input image data into a latent space for reconstruction or further processing. |
| SafetyChecker.mlmodelc | Ensures generated content adheres to safety guidelines by checking it against predefined criteria. |
| vocab.json | Contains the vocabulary used by the text encoder for tokenization and encoding. |
| merges.txt | Stores the merge rules for the byte-pair encoding used by the text encoder. |
- You need Xcode 13 or later installed to have access to the Documentation Compiler (DocC)
- Go to Product > Build Documentation or press ⌃⇧⌘D
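Alternatively, documentation can be built from the command line with `xcodebuild docbuild -scheme <YourScheme>`, where the scheme name is a placeholder for your own.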