
Qwen Image Edit LoRAs Guide

This guide introduces a suite of custom LoRA models designed to work with Qwen Edit inside Scenario’s “Edit With Prompts” interface. Each LoRA includes a description of its primary purpose, guidance on how to prompt it, and a link to its corresponding repository on Hugging Face.

The base model, Qwen-Image-Edit-2509, is a major update in the Qwen-Image series. It supports multi-image editing (1–3 inputs), delivers more consistent single-image edits (including people, products, and text), and offers native compatibility with ControlNet conditions such as depth maps, edge maps, and keypoints.

By layering specialized LoRA models on top of the Qwen-Image-Edit base model, Scenario enables creators to perform highly targeted editing workflows (such as virtual camera movement, texture application, scene coherence, and product-to-background integration) with significantly greater precision and creative control.

An example of the Apply Texture Qwen Edit LoRA: “Apply lava texture to the mug.”


Using Qwen Edit LoRAs in Scenario

To use a LoRA with Qwen Edit in Scenario:

  1. Go to Edit With Prompt: https://app.scenario.com/edit-with-prompts

  2. Select the “Qwen Edit” model

  3. Choose a LoRA from the list of available models in the LoRA panel on the left (Multiple Angles, Apply Texture, Photo to Anime, etc.).

  4. Paste the correct instruction for that LoRA into the Instructions field of your prompt. (All required instructions are provided in the sections below.)

Multiple Angles

HF: dx8152/Qwen‑Edit‑2509‑Multiple‑angles

This LoRA enables Qwen-Image-Edit to behave like a virtual camera operator. It has no special trigger words: simply describe the camera motion or lens you want. The model can move the camera forward or backward, left or right, up or down, rotate the view by 45°, and switch between wide-angle and close-up lenses.

  • “Move the camera forward and tilt it downward to create a top‑down view.”

  • “Rotate the camera 45 degrees to the left and switch to a wide‑angle lens.”

Behind the scenes, the LoRA modifies the model’s latent camera parameters while preserving object identity and overall scene consistency.

Example: The sample output on the model card demonstrates how this LoRA can change the viewpoint of the same object across different camera positions.


Apply Texture

HF: tarn59/apply_texture_qwen_image_edit_2509

Applies a specified texture to an object, building, or game asset. It was originally inspired by a commercial need to apply consistent stylistic textures across multiple assets.

Apply {texture description} texture to {target}.

For example:

  • “Apply wood siding texture to building walls”

  • “Apply salmon texture to leaves and stems”

The phrase “Apply … texture to …” is required to activate the LoRA. The model can also apply recursive textures, such as applying a texture of “a woman holding a sign” directly onto a sign surface.
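Because the trigger phrase follows a fixed template, instructions for a batch of assets can be generated programmatically. As a minimal sketch (the `texture_instruction` helper below is hypothetical, not part of Scenario):

```python
# Hypothetical helper for batch-building Apply Texture instructions.
# The "Apply ... texture to ..." phrasing is required to activate the LoRA.
def texture_instruction(texture: str, target: str) -> str:
    return f"Apply {texture} texture to {target}"

# Build one instruction per asset, e.g. for a set of scene props.
textures = ["wood siding", "salmon", "lava"]
targets = ["building walls", "leaves and stems", "the mug"]

for texture, target in zip(textures, targets):
    print(texture_instruction(texture, target))
# → Apply wood siding texture to building walls
#   Apply salmon texture to leaves and stems
#   Apply lava texture to the mug
```

Each generated string can then be pasted into the Instructions field as described above.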



Photo to Anime

HF: autoweeb/Qwen‑Image‑Edit‑2509‑Photo‑to‑Anime

This Qwen Edit LoRA transforms photographs—portraits or scenes—into anime-style illustrations. The LoRA is fine-tuned on Qwen-Image-Edit-2509 and works best with high-resolution portraits or clean images. Just prompt with a simple instruction such as:

Transform into anime

Because this LoRA applies a strong anime stylization—especially to faces—it works best with clear, well-lit headshots and simple backgrounds. The Hugging Face model card provides several before-and-after samples showing how photos are transformed into anime illustrations.


Edit Skin

HF: tlennon‑ie/qwen‑edit‑skin

Edit Skin is a specialized LoRA for enhancing the realism of human skin. It adds pores and subtle surface texture to faces and bodies, resulting in a more natural appearance compared with the base model. A LoRA strength of 1.0–1.5 is recommended (the examples use higher values for demonstration purposes). This LoRA is particularly useful for digital artists, photographers, and AI creators who want to improve the realism of portraits.

How to prompt: 

“make the subject’s skin details more prominent and natural”.

You can vary the phrasing slightly, but including this clause helps the model apply the LoRA. Keep in mind that this LoRA focuses on skin details and will not alter other aspects of the image.


3D Chibi

HF: rsshekhawat/Qwen‑Edit‑3DChibi‑LoRA

This LoRA generates stylized 3D chibi characters with a detailed, three-dimensional look. Trained on the Qwen-Image-Edit-2509 base model, it specializes in producing high-quality 3D Chibi-style images. To use it, enable the LoRA and include a prompt such as:

Convert this image into 3D Chibi Style.

The key trigger phrase is “3D Chibi Style.” You can also specify clothing, pose, or accessories, and the model will reinterpret the subject accordingly in chibi form.


Fusion

HF: dx8152/Qwen‑Image‑Edit‑2509‑Fusion

The Fusion Qwen Edit LoRA is designed to blend a product or object seamlessly into a new background. It adjusts perspective, lighting, and overall visual alignment so the object appears naturally integrated within the scene.

To use it, include the prompt:

Fuse the image, correct the product’s perspective and lighting, and make the product blend into the background.

This phrase should be included along with a description of the scene you want to place the object into. Example:


Next Scene

HF: lovis93/next‑scene‑qwen‑image‑lora‑2509

Next Scene is a cinematic LoRA that helps Qwen-Image-Edit produce consistent, storytelling-style image sequences. It has been trained to recognize camera motion, composition, and scene continuity, making each generated frame feel like the next shot in a film.

How to use:

  1. Set the LoRA strength to 0.7–0.8.

  2. Begin your prompt with “Next Scene:” followed by a description of the shot.

  3. For example:

“Next Scene: The camera moves slightly forward as sunlight breaks through the clouds, casting a soft glow around the character’s silhouette in the mist.”

The author recommends starting with the intended camera movement, specifying lighting or atmosphere changes, and chaining generations together to build storyboards. This LoRA is ideal for multi-frame workflows—such as cinematic sequences or AI video pipelines—but it is not intended for single-image portraits.
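The chaining workflow above can be sketched in a few lines. As an illustration (the `next_scene_prompts` helper is hypothetical, not a Scenario API), prefixing each shot description with the “Next Scene:” trigger turns a shot list into a storyboard of prompts:

```python
# Hypothetical helper: turn a list of shot descriptions into chained
# "Next Scene:" prompts for storyboard-style generation.
def next_scene_prompts(shots: list[str]) -> list[str]:
    return [f"Next Scene: {shot}" for shot in shots]

storyboard = next_scene_prompts([
    "The camera moves slightly forward as sunlight breaks through the clouds.",
    "The camera pans left, revealing the character's silhouette in the mist.",
])
for prompt in storyboard:
    print(prompt)
```

Each prompt in the list would then be run as its own generation, feeding the previous output back in as the next input frame.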


Light Restoration

HF: dx8152/Qwen‑Image‑Edit‑2509‑Light_restoration

Light Restoration removes harsh or overly strong lighting from an image and then re-illuminates it with soft, natural light. The author explains that creating lighting-focused LoRA datasets typically requires matching pairs of “lit” and “no-light” photos—a process that is both difficult and time-consuming. This LoRA avoids that requirement by allowing you to start from a lit image, remove the strong light, and generate a clean, natural-looking “no-light” version.

To use it, include a prompt such as:

Remove the shadows and re‑illuminate the picture with soft light

Example: The LoRA removes strong shadow and adds gentle lighting.

White to Scene

HF: dx8152/Qwen‑Image‑Edit‑2509‑White_to_Scene

This LoRA converts white-background product shots into realistic, context-appropriate scenes. To activate it, begin your prompt with “Convert white-background picture to a scene”, then describe the environment you want the product placed in. The model takes care of blending the subject into the new setting by adjusting lighting, shadows, and perspective so the result looks naturally integrated.

To use it, include a prompt such as:

Convert this white-background sofa photo into a realistic scene and position it in (describe scene)

The LoRA then integrates the subject into the specified environment, adjusting lighting, shadows, and overall color balance so the composition looks natural. It is often paired with the Lightning LoRA for improved quality and cleaner scene integration.

Example: a sofa photographed on a white background can be seamlessly composited into a lifestyle scene.


In‑Scene

HF: flymy‑ai/qwen‑image‑edit‑inscene‑lora

In‑Scene is an open‑source LoRA by FlyMy.AI that specializes in editing within the same scene. It improves control over scene composition, object positioning, and camera perspective while maintaining coherence during edits, including action sequences and other contextual changes.

To use this LoRA, provide a prompt describing the shot and action. The model card gives the following example:

Make a shot in the same scene of the left hand securing the edge of the cutting board while the right hand tilts it, causing the chopped tomatoes to slide off into the pan; camera angle shifts slightly to the left to centre more on the pan.

This LoRA is particularly useful for multi‑step cooking demonstrations, manufacturing processes or any scenario where you need to edit an object while preserving the original scene.

Example: The image below compares outputs without and with the In‑Scene LoRA. The LoRA‑edited output maintains spatial relationships and correctly positions the moving objects.


Upscale (Restore)

HF: vafipas663/Qwen‑Edit‑2509‑Upscale‑LoRA

Upscale improves image quality by recovering details lost to low resolution, oversharpening, noise, blur, and compression. It was trained on a filtered subset of the Unsplash-Lite and UltraHR-100K datasets and is designed to recover from severe degradations, including low resolution (up to 16×), 50% noise, Gaussian blur, JPEG artifacts, and motion blur. The LoRA is still under development, and the author notes that multiple checkpoints are being tested.

How to use: Begin your prompt with “Enhance image quality,” followed by a detailed description of the scene.

Because the LoRA was trained on photographic imagery, it is not suited for 2D art or illustrations.

Example: The following photo showcases the LoRA’s ability to recover fine details from a degraded input.
