NAME

docs/tests.pod - Testing Parrot

A basic guide to writing and running tests for Parrot

This is a quick and dirty pointer to how the Parrot test suite is executed and how new tests for Parrot should be written. The testing system is liable to change in the future, but tests written following the guidelines below should be easy to port into a new test suite.

How to test parrot

The easy way to test parrot is to run make test. If you have updated your code recently and tests begin failing, run make realclean and recompile parrot before complaining.

If your architecture supports JIT, you can test Parrot's JIT engine with make testj. It works just like make test, but uses the JIT engine when possible.

make languages-test runs the test suite for most language implementations in the languages directory.

Submitting smoke test results

Parrot has a status page with smoke test results at http://smoke.parrotcode.org/smoke/. You can supply new test results by running make smoke. It runs the same tests as make test, but creates an HTML table with the results. At the end, it tries to upload the test results to the smoke server.

It is also possible to run a smoke test on JIT. For that, try running make smokej.

make languages-smoke does smoke testing for most language implementations in the languages directory.

Location of the test files

The parrot test files, the *.t files, can be found in the t directory. A quick overview of the subdirectories in t can be found in t/README.

The language implementations usually have their test files in languages/*/t.

New tests should be added to an existing *.t file. When testing a previously untested feature, it may also make sense to create a new *.t file.

How to write a test

The testing framework needs to know how many tests it should expect, so the number of planned tests must be incremented when adding a new test. This is done near the top of a test file, in a line that looks like:

  use Parrot::Test tests => 8;
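
For orientation, here is a minimal sketch of a complete test file. It is a hypothetical example rather than a file copied from the repository; real test files usually carry extra boilerplate such as library path adjustments.

    #! perl
    # t/op/example.t - hypothetical minimal test file
    use strict;
    use warnings;
    use Parrot::Test tests => 1;

    pasm_output_is(<<'CODE', <<'OUTPUT', "print a constant");
        print "hello\n"
        end
    CODE
    hello
    OUTPUT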

Parrot Assembler

PASM tests are mostly used for testing ops. Appropriate test files for basic ops are t/op/*.t. Parrot Magic Cookies (PMCs) are tested in t/pmc/*.t. Add the new test like this:

    pasm_output_is(<<'CODE', <<'OUTPUT', "name for test");
        *** a big chunk of assembler, eg:
        print   1
        print   "\n" # you can even comment it if it's obscure
        end          # don't forget this...!
    CODE
    *** what you expect the output of the chunk to be, eg.
    1
    OUTPUT
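
The *** lines in the template above are placeholders. As a concrete sketch (the test name and values are invented), a test for an integer op could look like this:

    pasm_output_is(<<'CODE', <<'OUTPUT', "add two integer registers");
        set  I0, 2
        set  I1, 3
        add  I2, I0, I1
        print I2
        print "\n"
        end
    CODE
    5
    OUTPUT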

Parrot Intermediate Representation

Tests can also be written in PIR. This is done with pir_output_is and friends.

    pir_output_is(<<'CODE',<<'OUT','nothing useful');
        .include 'library/config.pir'

        .sub main :main
            print "hi\n"
        .end
    CODE
    hi
    OUT
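
One of those friends is pir_output_like, which matches the output against a Perl regular expression instead of a literal string. A sketch, assuming the usual /.../ convention for the expected block:

    pir_output_like(<<'CODE', <<'OUT', 'prints digits');
        .sub main :main
            $I0 = 42
            print $I0
            print "\n"
        .end
    CODE
    /^\d+\n$/
    OUT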

C source tests

C source tests are usually located in t/src/*.t. A simple test looks like:

    c_output_is(<<'CODE', <<'OUTPUT', "name for test");
    #include <stdio.h>
    #include "parrot/parrot.h"
    #include "parrot/embed.h"

    static opcode_t *the_test(Parrot_Interp, opcode_t *, opcode_t *);

    int main(int argc, char* argv[]) {
        Parrot_Interp interpreter;
        interpreter = Parrot_new(NULL);

        if (!interpreter)
            return 1;

        Parrot_init(interpreter);
        Parrot_run_native(interpreter, the_test);
        printf("done\n");
        fflush(stdout);
        return 0;
    }

    static opcode_t*
    the_test(Parrot_Interp interpreter,
        opcode_t *cur_op, opcode_t *start)
    {
        /* Your test goes here. */

        return NULL;  /* always return NULL */
    }
    CODE
    # Anything that might be output prior to "done".
    done
    OUTPUT

Note that it's always a good idea to output "done" to confirm that the compiled code executed completely. When mixing printf and PIO_printf, always append fflush(stdout); after the former.
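
To illustrate that note, here is a hedged sketch that mixes the two output functions. It reuses the skeleton above and assumes PIO_printf(interpreter, ...) from Parrot's I/O API; the exact messages are invented.

    c_output_is(<<'CODE', <<'OUTPUT', "mixing printf and PIO_printf");
    #include <stdio.h>
    #include "parrot/parrot.h"
    #include "parrot/embed.h"

    static opcode_t *the_test(Parrot_Interp, opcode_t *, opcode_t *);

    int main(int argc, char* argv[]) {
        Parrot_Interp interpreter;
        interpreter = Parrot_new(NULL);

        if (!interpreter)
            return 1;

        Parrot_init(interpreter);
        Parrot_run_native(interpreter, the_test);
        printf("done\n");
        fflush(stdout);
        return 0;
    }

    static opcode_t*
    the_test(Parrot_Interp interpreter,
        opcode_t *cur_op, opcode_t *start)
    {
        printf("via stdio\n");
        fflush(stdout);  /* flush stdio before Parrot's own I/O writes */
        PIO_printf(interpreter, "via PIO\n");

        return NULL;
    }
    CODE
    via stdio
    via PIO
    done
    OUTPUT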

Testing language implementations

Language implementations are usually tested with the test function language_output_is.
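
A sketch of such a test, assuming the language name is passed as the first argument; the language, code, and expected output below are placeholders only:

    language_output_is('SomeLanguage', <<'CODE', <<'OUTPUT', "print a constant");
    print "hello"
    CODE
    hello
    OUTPUT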

Ideal tests:

TODO tests

In test-driven development, tests are implemented first, so they are initially expected to fail. This can be expressed by marking the tests as TODO. See Test::More on how to do that.
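
As a sketch of the generic Test::More mechanism (the test itself is a placeholder), a TODO block looks like this:

    use Test::More tests => 1;

    TODO: {
        local $TODO = "feature not implemented yet";

        # Expected to fail for now; the harness reports
        # unexpected success separately.
        ok( 0, "hypothetical unimplemented feature" );
    }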

SKIP tests

TODO tests are actually executed, so that unexpected success can be detected. When requirements are missing or something is seriously broken, the execution of tests can be skipped instead. See Test::More on how to do that.
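
A sketch of a SKIP block, assuming Test::More is loaded alongside Parrot::Test so that skip() is available; the platform check is only an example condition:

    SKIP: {
        skip "only relevant on Win32", 1 unless $^O =~ /Win32/;

        pasm_output_is(<<'CODE', <<'OUTPUT', "Win32-specific behaviour");
            print "ok\n"
            end
    CODE
    ok
    OUTPUT
    }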

SEE ALSO

http://qa.perl.org/

