10 April 2026

Building Pixlite: A Deep Dive into WorkManager, Image Processing, and Clean Android Architecture

How I used Android’s WorkManager to build a production-grade image processing app with compress, blur, and gallery save — all guaranteed to complete even if the user closes the app.


Why I Built This

Every Android developer eventually hits the same wall: you kick off a background task, the user swipes the app away, and your work silently dies. File upload? Gone. Image processing? Cancelled. Analytics flush? Never happened.

This is the problem WorkManager was built to solve. But most tutorials stop at “here’s how to run a simple task.” I wanted to go deeper — show how WorkManager behaves in a real app with multiple workers, chaining, progress reporting, gallery saving, and sharing. PixelForge is that app.


What PixelForge Does

PixelForge is an Android image processing app built entirely with Jetpack Compose. It lets you:

  • Pick an image from your gallery using the modern system Photo Picker
  • Compress it — reduce file size by adjusting JPEG quality (10–100%)
  • Blur it — apply a Gaussian-style StackBlur effect (radius 1–25px)
  • Chain both — compress first, then blur the result
  • Save the output directly to Pictures/PixelForge/ in your gallery
  • Share it to any app via Android’s Intent system

Every single one of these operations — compress, blur, save — runs inside a WorkManager CoroutineWorker. Nothing touches the main thread.


The Architecture: Three Workers, One Chain

The heart of the app is three CoroutineWorker classes, each with a single responsibility.

ImageCompressWorker

This worker takes an image URI and a quality value (0–100), decodes the bitmap, re-encodes it as JPEG at the specified quality, and saves the result to the app’s cache directory.

class ImageCompressWorker(ctx: Context, params: WorkerParameters)
    : CoroutineWorker(ctx, params) {

    override suspend fun doWork(): Result {
        val uri     = inputData.getString(KEY_IMAGE_URI) ?: return Result.failure()
        val quality = inputData.getInt(KEY_QUALITY, 80)

        setProgress(workDataOf("progress" to 10))

        // use {} closes the stream; bail out if decoding fails
        val bitmap = applicationContext.contentResolver
            .openInputStream(Uri.parse(uri))
            ?.use { BitmapFactory.decodeStream(it) }
            ?: return Result.failure()

        setProgress(workDataOf("progress" to 60))

        val out = ByteArrayOutputStream()
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, out)

        val file = File(applicationContext.cacheDir, "compressed_${System.currentTimeMillis()}.jpg")
        FileOutputStream(file).use { it.write(out.toByteArray()) }

        return Result.success(workDataOf(
            KEY_OUTPUT_URI      to file.absolutePath,
            KEY_ORIGINAL_SIZE   to getOriginalSize(Uri.parse(uri)),
            KEY_COMPRESSED_SIZE to out.size().toLong()
        ))
    }
}

A few things worth noting here. setProgress() publishes intermediate state that observers can read from WorkInfo.progress — this is how the progress bar in the UI updates while the worker runs. The output data returned in Result.success() becomes available via WorkInfo.outputData once the state reaches SUCCEEDED. When workers are chained, this output data is automatically merged into the next worker’s input.

BlurWorker

The blur worker implements StackBlur — a fast O(w × h) algorithm that approximates Gaussian blur without the O(w × h × r²) cost of a naive implementation. Critically, it uses no RenderScript (deprecated in API 31) and no external libraries. Pure CPU, pure Kotlin.

class BlurWorker(ctx: Context, params: WorkerParameters)
    : CoroutineWorker(ctx, params) {

    override suspend fun doWork(): Result {
        val uriString = inputData.getString(KEY_IMAGE_URI) ?: return Result.failure()
        val radius    = inputData.getInt(KEY_BLUR_RADIUS, 10).coerceIn(1, 25)

        // Load bitmap — handles both file paths (from chain) and content URIs (standalone)
        val bitmap = if (uriString.startsWith("/")) {
            BitmapFactory.decodeFile(uriString)
        } else {
            applicationContext.contentResolver
                .openInputStream(Uri.parse(uriString))
                ?.use { BitmapFactory.decodeStream(it) }
        } ?: return Result.failure()

        val blurred = stackBlur(bitmap, radius)

        val file = File(applicationContext.cacheDir, "blurred_${System.currentTimeMillis()}.jpg")
        FileOutputStream(file).use { blurred.compress(Bitmap.CompressFormat.JPEG, 95, it) }

        return Result.success(workDataOf(KEY_OUTPUT_URI to file.absolutePath))
    }
}

Notice the dual URI handling. When BlurWorker runs standalone (Blur only mode), it receives a content:// URI from the image picker. When it runs chained after ImageCompressWorker, it receives a file path — the absolutePath written by the compress worker. The branch handles both formats cleanly.
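The full stackBlur() implementation is too long to reproduce here, but the sliding-window idea behind its radius-independent cost fits in a few lines. This is a simplified single-channel, single-row box blur for illustration, not the app's actual code:

```kotlin
// Minimal sliding-window box blur over one row of pixel values.
// Each output pixel costs O(1) because the window sum is updated
// incrementally, so a full pass is O(n) regardless of radius.
fun boxBlurRow(src: IntArray, radius: Int): IntArray {
    val n = src.size
    val out = IntArray(n)
    val window = 2 * radius + 1
    var sum = 0
    // Prime the window around index 0, clamping at the edges.
    for (i in -radius..radius) sum += src[i.coerceIn(0, n - 1)]
    for (i in 0 until n) {
        out[i] = sum / window
        // Slide right: add the pixel entering the window, drop the one leaving.
        sum += src[(i + radius + 1).coerceIn(0, n - 1)]
        sum -= src[(i - radius).coerceIn(0, n - 1)]
    }
    return out
}
```

StackBlur layers a weighted "stack" on top of this same trick and runs a horizontal and a vertical pass per channel, which is what keeps it at O(w × h) overall.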

SaveToGalleryWorker

This is the worker most tutorials skip. Saving to MediaStore is I/O work — it absolutely should not block the main thread. Wrapping it in a worker also means the save completes even if the user navigates away mid-operation.

class SaveToGalleryWorker(ctx: Context, params: WorkerParameters)
    : CoroutineWorker(ctx, params) {

    override suspend fun doWork(): Result {
        val sourcePath  = inputData.getString(KEY_SOURCE_PATH) ?: return Result.failure()
        val displayName = "PixelForge_${System.currentTimeMillis()}.jpg"

        val uri = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            saveViaMediaStoreQ(File(sourcePath), displayName)
        } else {
            saveViaLegacy(File(sourcePath), displayName)
        }

        return if (uri != null) {
            Result.success(workDataOf(KEY_SAVED_URI to uri.toString()))
        } else {
            Result.failure(workDataOf("error" to "MediaStore insert failed"))
        }
    }
}

On API 29+, the worker uses the IS_PENDING flag — a MediaStore feature that marks the file as incomplete during the write. Other apps cannot see a file while IS_PENDING = 1. Once the write finishes, the flag is cleared and the image becomes visible in the gallery atomically. On older API levels, the worker writes directly to the public Pictures directory and notifies the media scanner.
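The article names saveViaMediaStoreQ() but does not show its body. Here is a hedged sketch of what it typically looks like, based on standard MediaStore usage on API 29+ rather than the repo's actual code:

```kotlin
// Assumed implementation, not from the repo: insert a pending row, stream
// the bytes in, then clear IS_PENDING so the image appears atomically.
private fun saveViaMediaStoreQ(source: File, displayName: String): Uri? {
    val resolver = applicationContext.contentResolver
    val values = ContentValues().apply {
        put(MediaStore.Images.Media.DISPLAY_NAME, displayName)
        put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
        put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/PixelForge")
        put(MediaStore.Images.Media.IS_PENDING, 1) // hidden from other apps until cleared
    }
    val uri = resolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)
        ?: return null

    resolver.openOutputStream(uri)?.use { out ->
        source.inputStream().use { input -> input.copyTo(out) }
    } ?: return null

    // Clearing the flag publishes the image to the gallery in one step.
    values.clear()
    values.put(MediaStore.Images.Media.IS_PENDING, 0)
    resolver.update(uri, values, null, null)
    return uri
}
```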


Work Chaining: The Most Powerful Feature

When the user picks “Compress + Blur” mode, the app chains two workers:

val compressRequest = OneTimeWorkRequestBuilder<ImageCompressWorker>()
    .setInputData(workDataOf(
        ImageCompressWorker.KEY_IMAGE_URI to uri.toString(),
        ImageCompressWorker.KEY_QUALITY   to quality
    ))
    .addTag("image_processing")
    .build()

val blurRequest = OneTimeWorkRequestBuilder<BlurWorker>()
    .setInputData(workDataOf(
        BlurWorker.KEY_BLUR_RADIUS to blurRadius
        // KEY_IMAGE_URI is intentionally absent here!
    ))
    .addTag("image_processing")
    .build()

WorkManager.getInstance(context)
    .beginWith(compressRequest)
    .then(blurRequest)          // output of compress flows here automatically
    .enqueue()

The key insight is in the comment: KEY_IMAGE_URI is not passed to the blur request. When requests are chained, WorkManager merges compressRequest's output Data into blurRequest's input Data. The compress worker writes its result under KEY_OUTPUT_URI, and the blur worker reads KEY_IMAGE_URI; this works because both constants resolve to the same underlying string key. Matching string keys, automatic flow.
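To make the key-matching concrete, here is a minimal sketch of how the constants could be defined (the string values are assumptions, not the repo's actual values), plus a plain-map simulation of the merge WorkManager performs between chained requests:

```kotlin
// Hypothetical key constants. The only requirement for chaining is that the
// compress worker's OUTPUT key and the blur worker's INPUT key share one string.
object CompressKeys {
    const val IMAGE_URI  = "IMAGE_URI"
    const val QUALITY    = "QUALITY"
    const val OUTPUT_URI = "IMAGE_URI" // deliberately identical to BlurKeys.IMAGE_URI
}

object BlurKeys {
    const val IMAGE_URI   = "IMAGE_URI"
    const val BLUR_RADIUS = "BLUR_RADIUS"
}

fun main() {
    // Simulate the merge with plain maps: what compress wrote under OUTPUT_URI
    // is exactly what blur reads back under IMAGE_URI.
    val compressOutput = mapOf(CompressKeys.OUTPUT_URI to "/cache/compressed_123.jpg")
    val blurInput = mapOf(BlurKeys.BLUR_RADIUS to "10") + compressOutput
    println(blurInput[BlurKeys.IMAGE_URI]) // prints /cache/compressed_123.jpg
}
```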

The ViewModel observes both workers independently so the UI can show granular status — “Compressing… 60%” transitions to “Blur queued — waiting for compress…” and then “Blurring… 40%”.


Observing Work: Flow over LiveData

The app uses getWorkInfoByIdFlow() rather than the older LiveData variant. This fits naturally into the Compose + StateFlow architecture:

viewModelScope.launch {
    WorkManager.getInstance(context)
        .getWorkInfoByIdFlow(requestId)
        .collect { workInfo ->
            workInfo ?: return@collect  // the flow can emit null before the request is tracked
            when (workInfo.state) {
                WorkInfo.State.RUNNING -> {
                    val pct = workInfo.progress.getInt("progress", 0)
                    _uiState.update { it.copy(workStatus = "RUNNING ($pct%)", currentStage = "Compressing...") }
                }
                WorkInfo.State.SUCCEEDED -> {
                    val path = workInfo.outputData.getString(KEY_OUTPUT_URI)
                    _uiState.update {
                        it.copy(isWorking = false, outputImageUri = path?.let { p -> Uri.parse("file://$p") })
                    }
                }
                WorkInfo.State.FAILED -> { /* handle */ }
                else -> {}
            }
        }
}

WorkInfo.State is an enum with six values: ENQUEUED, RUNNING, SUCCEEDED, FAILED, CANCELLED, and BLOCKED. The BLOCKED state is particularly useful in chains — it tells you a worker is queued but waiting for its predecessor to finish.
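In Compress + Blur mode, the two WorkInfo flows can be combined into a single stage label so BLOCKED surfaces as a "waiting" message. This is a sketch of one way the ViewModel could do it, an assumption rather than the repo's actual code:

```kotlin
// Sketch, not repo code: merge both flows so the UI can show "Blur queued"
// while the blur request sits in BLOCKED behind the compress request.
viewModelScope.launch {
    val wm = WorkManager.getInstance(context)
    combine(
        wm.getWorkInfoByIdFlow(compressRequest.id),
        wm.getWorkInfoByIdFlow(blurRequest.id)
    ) { compress, blur ->
        when {
            compress != null && compress.state == WorkInfo.State.RUNNING ->
                "Compressing... ${compress.progress.getInt("progress", 0)}%"
            blur != null && blur.state == WorkInfo.State.BLOCKED ->
                "Blur queued — waiting for compress..."
            blur != null && blur.state == WorkInfo.State.RUNNING ->
                "Blurring... ${blur.progress.getInt("progress", 0)}%"
            blur != null && blur.state == WorkInfo.State.SUCCEEDED -> "Done"
            else -> ""
        }
    }.collect { stage -> _uiState.update { it.copy(currentStage = stage) } }
}
```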


The Double Image Picker Bug — and How We Fixed It

During development, the image picker was opening twice on certain devices. Here is exactly why.

The original code used two launchers chained together:

// OLD CODE — DO NOT USE
val pickerLauncher = rememberLauncherForActivityResult(GetContent()) { uri ->
    uri?.let { vm.onImageSelected(it, context) }
}
val readPermLauncher = rememberLauncherForActivityResult(RequestPermission()) { granted ->
    if (granted) pickerLauncher.launch("image/*")
}
// Triggered by button tap:
readPermLauncher.launch(READ_MEDIA_IMAGES)

On devices where the permission was already granted, RequestPermission delivered its result synchronously — inside the same Compose frame that was still processing the button tap. At that exact moment, Compose had a pending recomposition scheduled. Some OEM implementations of the activity result APIs (particularly Samsung and Xiaomi) re-delivered the result during the recomposition, causing pickerLauncher.launch() to be called a second time.

The fix is to eliminate the chain entirely. The Android Photo Picker (ActivityResultContracts.PickVisualMedia) is a system-provided image chooser that requires no runtime permission on any API level. One launcher, one tap:

// FIXED CODE
val pickerLauncher = rememberLauncherForActivityResult(
    contract = ActivityResultContracts.PickVisualMedia()
) { uri: Uri? ->
    uri?.let { vm.onImageSelected(it, context) }
}
// Triggered by button tap:
pickerLauncher.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

No permission dialog. No callback chain. No double-open. The system photo picker also gives users finer-grained control: they can grant access to specific photos rather than the entire library, a clear privacy improvement.


UI: Compose + StateFlow

The entire UI is driven by a single CompressUiState data class:

data class CompressUiState(
    val selectedImageUri: Uri?    = null,
    val outputImageUri: Uri?      = null,
    val savedToGalleryUri: Uri?   = null,
    val mode: ProcessMode         = ProcessMode.COMPRESS_ONLY,
    val quality: Int              = 80,
    val blurRadius: Int           = 10,
    val isWorking: Boolean        = false,
    val isSaving: Boolean         = false,
    val isSaved: Boolean          = false,
    val workStatus: String        = "",
    val currentStage: String      = "",
    val originalSize: String      = "--",
    val outputSize: String        = "--",
    val savedPercentage: String   = "--",
    val errorMessage: String?     = null
)

The ViewModel exposes this as a StateFlow, the screen collects it with collectAsState(), and every UI element derives its appearance purely from state. There is no imperative UI code anywhere — the progress bar, save button state, image previews, and status chip all react to state changes emitted by the WorkManager observers.


What I Learned

WorkManager is not a thread pool. It is a persistent job scheduler. The distinction matters: a thread pool runs work now, on this process. WorkManager schedules work for the operating system to run, possibly in a future process. That is why work persists across reboots — the OS relaunches your app specifically to run it.
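Because the OS owns execution, a request can carry things a plain thread pool cannot express, such as constraints and uniqueness. A sketch of what that looks like (the unique name and constraint choice here are illustrative assumptions, not features the article describes):

```kotlin
// Sketch, not from the repo: attach an OS-level constraint and a unique name.
val request = OneTimeWorkRequestBuilder<ImageCompressWorker>()
    .setConstraints(
        Constraints.Builder()
            .setRequiresStorageNotLow(true) // defer the work while storage is critically low
            .build()
    )
    .build()

WorkManager.getInstance(context).enqueueUniqueWork(
    "pixelforge_pipeline",      // hypothetical unique name
    ExistingWorkPolicy.REPLACE, // re-enqueueing cancels the in-flight run
    request
)
```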

Chaining is more powerful than it looks. The automatic output-to-input piping means you can compose independent workers into pipelines without any of them knowing about each other. ImageCompressWorker has no import for BlurWorker. They are fully decoupled — the chain is assembled at the call site in the ViewModel.

FileProvider is not optional. From Android 7.0, passing a file:// URI to another app via an Intent throws a FileUriExposedException. FileProvider converts cache files to content:// URIs with scoped, permission-gated access. Every share feature needs it.
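A sketch of what the share step typically looks like. The helper name and authority string are assumptions, and the manifest must declare a matching &lt;provider&gt; whose path config covers cacheDir:

```kotlin
// Assumed helper, not the repo's actual code.
fun shareImage(context: Context, file: File) {
    val uri: Uri = FileProvider.getUriForFile(
        context,
        "${context.packageName}.fileprovider", // assumed authority
        file
    )
    val send = Intent(Intent.ACTION_SEND).apply {
        type = "image/jpeg"
        putExtra(Intent.EXTRA_STREAM, uri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION) // scoped, temporary read grant
    }
    context.startActivity(Intent.createChooser(send, "Share image"))
}
```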

PickVisualMedia should be your default picker. Our old flow paired GetContent("image/*") with a runtime permission request, which needed a second launcher, and that launcher chain is exactly what caused the double-picker bug. The system Photo Picker sidesteps all of this and needs no permission on any API level.


Conclusion

PixelForge is a small app but it covers a large surface area of production Android development: WorkManager with progress reporting, worker chaining, MediaStore integration, FileProvider sharing, the system Photo Picker, StateFlow-driven Compose UI, and proper separation of concerns between workers, ViewModel, and UI.

The full source is in this repository. Each file is heavily commented to explain not just what the code does, but why each decision was made.


Built with Kotlin, Jetpack Compose, WorkManager 2.9, and Coil 2.5. Minimum API 21.



Source Code

https://github.com/rishiz-n/Pixlite


