Graphics

OS in Rust

Homework

  • Homework is due on the Friday after the week of the corresponding lecture.
  • So 9 days after the corresponding lab.

Requirements

Be advised this is a multistage lab that is intended to be interesting rather than easy.

Step 0

  • Complete the “Format” lab.

Step 1

Add a function to src/main.rs to display all background colors in qemu via the VGA buffer.

  • For this lab, we will display images using the VGA text buffer background color.
  • To do that, we need to know what colors we have.
  • To learn what colors we have, we will first display all of them.

Colors

  • I created a new file.
    • I plan not to use it after the lab, so I factored it out.
src/colors.rs
pub fn colors() {
    // I had 9 total lines here.
    // 1 line was in an unsafe block (which was 3 lines total)
}

Main

  • There are two minor changes to main.
    • I call colors in _start and nothing else.
    • I add mod colors; early in the file.
src/main.rs
#![no_std]
#![no_main]

mod colors;
mod vga;

#[panic_handler]
fn panic(info: &core::panic::PanicInfo) -> ! {
    println!("{}", info);
    loop {}
}

#[unsafe(no_mangle)]
pub extern "C" fn _start() -> ! {
    colors::colors();
    loop {}
}

Your task

  • Create 16 evenly sized columns, one for each background color, across the screen.
  • Display no text.
  • Recall that there are 80 horizontal by 25 vertical characters.
  • Recall that there are 720 horizontal by 400 vertical pixels.
  • I am creating the below example using hexcodes I extracted and raw HTML.
    • It is hardcoded to 720 \(\times\) 400 and may look odd on some screens.
    • It is not an image.
  • It is obviously trivial to extract color values from this.
  • Fortunately, you are still required to include a src/colors.rs in your 52 folder.
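The column-to-cell arithmetic can be sketched as a pure function. This is my own sketch, not the author's elided `colors()` body: it assumes the usual VGA text-mode layout (each cell is a `u16`, ASCII byte low, attribute byte high, background color in the attribute's upper nibble, with all 16 backgrounds available when blink is disabled). In the kernel, the resulting values would be written to `0xb8000` inside an `unsafe` block.

```rust
// VGA text mode: 80x25 cells, each a u16 with the ASCII byte low
// and the attribute byte high. Bits 12..=15 of the cell select the
// background color: 16 backgrounds, assuming blink is disabled.
const WIDTH: usize = 80;
const COLORS: u16 = 16;

/// Cell value for a blank space whose background is its column's
/// stripe color: 80 / 16 = 5 character columns per color.
fn stripe_cell(col: usize) -> u16 {
    let color = (col as u16) / (WIDTH as u16 / COLORS); // 0..=15
    (color << 12) | b' ' as u16
}
```

Writing `stripe_cell(col)` for every `(row, col)` in the 80 by 25 grid produces the 16 even columns; no text is displayed because every cell holds a space.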

Step 2

Display and screendump the background colors to .ppm

  • Easy step.
  • Once your qemu is displaying as above, you are ready to proceed.

The Monitor

  • qemu has more features than I can recall anymore; most of them very cool.
  • One is the “Monitor”.
  • Within the QEMU window, you can use

    Ctrl+Alt+2

  • From here, you can input a variety of commands.
  • Try help and have at least one thought about it.
    • qemu is listed in a job ad for a Google position in Portland starting at $113k/yr as I type this.

Screendump

  • Try the following.
    • You can choose your own filename.
    • I used dump.ppm
    • You will want to use .ppm
screendump dump.ppm
  • QEMU documentation states:

Save screen into PPM image filename.

Step 3

Extract hex values from the .ppm

PPM files

  • The PPM file format is ancient and I’m not sure of the authoritative documentation.
  • This appears correct to me. https://netpbm.sourceforge.net/doc/ppm.html
  • You don’t need to dwell on it overlong but:
    • There is a header that gives the file type, the image size, and the maximum color value (maxval).
    • There’s a block write of pixels.
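A minimal parser for that layout might look like the following. This is a sketch under two assumptions: the dump is the binary (P6) variant, and its header puts the magic, dimensions, and maxval on their own comment-free lines, which is what I would expect QEMU to emit. `palette_from_stripes` then recovers the 16 colors by sampling the center of each vertical stripe from Step 1.

```python
import numpy as np

def read_ppm(path):
    """Parse a binary (P6) PPM, assuming a comment-free header:
    magic, then 'width height', then maxval, then raw RGB bytes."""
    with open(path, "rb") as f:
        magic = f.readline().split()[0]
        assert magic == b"P6", "only the binary variant is handled"
        width, height = map(int, f.readline().split())
        maxval = int(f.readline())
        assert maxval == 255, "one byte per channel assumed"
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    return pixels.reshape(height, width, 3)

def palette_from_stripes(img, n=16):
    """Sample one pixel from the center of each of n vertical stripes."""
    h, w, _ = img.shape
    cols = ((np.arange(n) + 0.5) * w / n).astype(int)
    return img[h // 2, cols]
```

Run on a dump of the Step 1 screen, `palette_from_stripes(read_ppm("dump.ppm"))` yields a `(16, 3)` array of the RGB values QEMU actually renders for the 16 background colors.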

Step 4

Create a script to convert images from url to VGA buffer array

The following instructions assume students will proceed using Python’s NumPy package. This fulfills a secondary learning objective of familiarity with vector operations that were discussed briefly here.

  • As a fun bit, you may want to set up your script to optionally accept a URL at command line.
    • I did this, and it made testing a bit easier.
    • I also had an IMG_URL pointed at something that would otherwise display if there was no argument.

Advanced students will already be familiar with these operations and with Python. They should complete Step 4 in CUDA C++, fulfilling the extension learning objective of familiarity with GPU programming and bus operations as mediated by the host/device metaphor. If you do not have access to a physical NVIDIA GPU, I provide instructions on usage in Colab here.

Learning CUDA is straight-forward and an all-around good time. Learning C++ is quite the opposite. By combining both, you can practice moderation.

CUDA Programming Guide

Use of Numpy

Download

Given this is an advanced class, I’m going to trust you to do the following without providing example code.

  • Regard it as “unsporting” to download a file and work with a local copy.

  • Use requests to get an image by url.

  • Use BytesIO to interpret the HTTP response as a file.

  • Use PIL’s Image API to open the image.

  • Coerce the image to a NumPy array using NumPy.
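Without giving away anything clever, the four bullets above chain together in a few lines. `requests` and `PIL` (Pillow) are third-party packages; the function names here are my own.

```python
from io import BytesIO

import numpy as np
import requests            # third-party: HTTP client
from PIL import Image      # third-party: Pillow

def bytes_to_array(raw):
    """Interpret raw image bytes as a file and coerce to an RGB array."""
    img = Image.open(BytesIO(raw)).convert("RGB")
    return np.asarray(img)

def fetch_image(url):
    """Download an image by URL; return it as an H x W x 3 uint8 array."""
    resp = requests.get(url)
    resp.raise_for_status()
    return bytes_to_array(resp.content)
```

With this in place, `fetch_image("https://cd-rs.github.io/os/img/rainbow.jpg")` should hand back a pixel array with no local copy ever touching disk.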

Images

I only got .jpg and .webp to work for some reason, but it’s not a big deal and you aren’t required to learn PIL.

  • I took this myself so it is definitely free to use.
    • Her name is Ursula.
  • This is a wide picture that should be fine, from a tourism bureau.
  • And maybe the best for testing, an Adobe Stock rainbow.
    • The url looked unstable so I’m rehosting.
    • Hopefully I’m hosting stably.
    https://cd-rs.github.io/os/img/rainbow.jpg

Scale

  • Do I look like I know how to scale images?
    • Probably either sample or average.
    • I don’t think I care.
  • My code scales to fit the VGA buffer horizontally.
    • Then either “squish” or “stretch” vertically.
    • You could also crop horizontally.
      • Doesn’t matter to me.
    • Recall, the VGA buffer has more pixels than we are addressing.
    • You can only address characters, which are many pixels in size.
    • Your image will be low resolution.
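The “squish or stretch” option above can be done with nearest-neighbor sampling and NumPy fancy indexing; no loops, no interpolation. This is one sketch of the sampling approach, not the author's exact code.

```python
import numpy as np

def scale_to_vga(img, width=80, height=25):
    """Nearest-neighbor sample an H x W x 3 array down to the 80 x 25
    character grid. Aspect ratio is not preserved (squish/stretch)."""
    h, w, _ = img.shape
    rows = np.arange(height) * h // height   # source row per output row
    cols = np.arange(width) * w // width     # source col per output col
    return img[rows][:, cols]
```

Averaging over each source block instead of sampling one pixel would look a little smoother, but at 80 by 25 it genuinely may not matter.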

Color Map

Code Structure Note

I included my code extracting hex-values from dump.ppm here.

I thought it made more sense than making a new file, it shared similar imports, and was only a few lines.

Plus, if I run on another system, which may have a different color representation, I’ll be able to quickly generate the .ppm and have no other steps!

  • Your image will include colors that cannot be recognized by the VGA text buffer.
  • You will need to compute color distance between the
    • Color you are trying to show, and
    • The 16 available colors you computed in a previous step.
  • A naive approach would be to regard colors in 3-space with dimensions of R, G, and B values.
    • This is appropriate, but sub-optimal.
    • So it’s what I did.

Advanced students will already be familiar with distance in \(n\)-space and hexcodes and should use a perceptually uniform color space. In this case, L*a*b.
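For the advanced route, one common formulation of the sRGB to L*a*b* conversion (D65 white point, CIE 1976) can be written directly in NumPy. The matrix and constants below are the standard published values as I recall them; treat this as a sketch to check against a reference, not authoritative.

```python
import numpy as np

# sRGB -> linear RGB -> XYZ (D65) -> CIE L*a*b*.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])  # D65 reference white

def srgb_to_lab(rgb):
    """rgb: (..., 3) array of 0-255 sRGB values -> (..., 3) Lab."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # Undo the sRGB gamma curve.
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M.T / WHITE
    # CIE f(), with its linear toe near zero.
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Mapping both your pixels and the 16-color palette through this, then taking Euclidean distance in Lab instead of RGB, is the whole “perceptually uniform” upgrade.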

Read more

There is first-class support in Python, I hear, and you can learn more from this poster, by one of Professor Brown’s undergraduate researchers.

Poster

You are doing this in NumPy to take advantage of vectorized operations.

  • Read more
  • Regard it as “unsporting” to loop over pixels one at a time.
    • I didn’t vectorize across the full pixel array because I converted to a string in the same stage (as a list comprehension).
    • You will want to look up “argmin”.
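The “argmin” hint plays out like this: broadcast every pixel against every palette entry, sum squared differences, and take the argmin along the palette axis. A sketch of the naive RGB 3-space version, vectorized across the whole array:

```python
import numpy as np

def nearest_color(pixels, palette):
    """Index of the nearest palette entry for each pixel, by plain
    Euclidean distance in RGB 3-space.
    pixels: (H, W, 3); palette: (16, 3); returns (H, W) of 0..15."""
    diff = (pixels[:, :, None, :].astype(float)
            - palette[None, None, :, :].astype(float))
    dist2 = (diff ** 2).sum(axis=-1)   # (H, W, 16) squared distances
    return dist2.argmin(axis=-1)       # no per-pixel loop anywhere
```

Squared distance is enough here, since argmin of the square agrees with argmin of the distance.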

Output

  • I used a list comprehension to map my colormapper across the scaled pixel array, format to string, and write to a .rs file.
    • You are not required to use a single line, obviously.
  • I want to show file sizes.
    • I wrote only single lines of code (no blocks) with blank lines between.
    • It was not minimized but it was not verbose either.
      • 5 imports
      • 4 “constants”
        • Python doesn’t have constants.
      • 6 lines of executable code.
      $ wc url_to_rs.py src/colors/img.rs
      22   75  704 url_to_rs.py
       0 2006 9257 src/colors/img.rs
  • I wrote to src/colors/img.rs since I think that is where a file used by src/colors.rs should live.
    • I didn’t love this because I had to make a new directory.
      • I just mkdir once rather than scripting this step.
    • Do whatever you want here, as long as your url_to_rs.py writes to the right place.
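The final write can be as small as a format string around the flattened index array. The static's name (`IMG`) and exact shape below are my own choices, not something the assignment fixes.

```python
def to_rust(indices, path="src/colors/img.rs"):
    """Write a grid of palette indices (e.g. 25 rows of 80) as a
    Rust static array. IMG is an arbitrary name of my choosing."""
    flat = ", ".join(str(i) for row in indices for i in row)
    n = len(indices) * len(indices[0])
    with open(path, "w") as f:
        f.write(f"pub static IMG: [u8; {n}] = [{flat}];\n")
```

One `u8` per character cell keeps the generated file small, which is consistent with the `wc` output shown above.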

Step 5

  • Add an image() function to src/colors.rs that:
    • Reads the image data from src/colors/img.rs
    • Writes the image data to the VGA text buffer as background colors.
  • Call this function from src/main.rs
  • Look at your image.
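The Rust side of Step 5 is mostly the inverse of Step 1's arithmetic. A sketch, again under my assumption about the cell layout: the kernel version would target the buffer at 0xb8000 inside an `unsafe` block, but the logic is written here against a plain slice so it stands on its own.

```rust
/// Turn a palette index (0..=15) into a VGA text cell: a blank
/// space whose attribute carries the index as the background color
/// (assumes blink is disabled so all 16 backgrounds are usable).
fn cell(color: u8) -> u16 {
    ((color as u16) << 12) | b' ' as u16
}

/// Fill a cell buffer from per-character palette indices. In the
/// kernel, `buf` would be the 80x25 buffer at 0xb8000.
fn render(indices: &[u8], buf: &mut [u16]) {
    for (dst, &ix) in buf.iter_mut().zip(indices) {
        *dst = cell(ix);
    }
}
```

With the generated `IMG` array included via `mod` from src/colors/img.rs, `image()` reduces to calling `render` over the buffer and letting the background colors do the drawing.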

Example

  • Here is my result over the “Mt. Massive” image from the tourism bureau.

Original

  • Hosting cross-site.

VGA Text Buffer

  • Extracted to .ppm, converted to .png, converted to base64 and directly embedded.