Hacks used to migrate a Trac SQLite or PostgreSQL dump to GitHub.
Works with Python 3.8.
# Wiki migration
For wiki migration, you will need git available in your dev environment.
This is a 3-stage process:
- Copy `config.py.sample` over `config.py`, and edit all the settings.
- Create the GitHub wiki pages using content formatted as TracWiki. This is done to get better diffs between historic versions.
- Convert the last version of each page to reStructuredText, or to any other format.
Create a virtualenv:
```
virtualenv build
. build/bin/activate
mv config.py.sample config.py
```
Modify the `config.py` values.
All pages are generated into a flat file structure. Spaces are used instead of path separators:
```
python wiki_migrate.py PATH/TO/Trac.db3 PATH/TO/GIT-REPO
```
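The flattening scheme can be illustrated with a small sketch. The function name `flatten_page_name` is hypothetical; the actual logic lives in `wiki_migrate.py`:

```python
def flatten_page_name(trac_path: str) -> str:
    """Map a hierarchical Trac page path to a flat GitHub wiki page name.

    Illustrative only: the real rules are implemented in wiki_migrate.py.
    """
    # "Infrastructure/Services/DNS" becomes "Infrastructure Services DNS".
    return trac_path.replace("/", " ")
```

This is why sub-page links such as `Infrastructure-Services` appear as single flat names in the sidebar below.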
You might want to add a `_Sidebar.rst` file in the root with:

```
* `<Administrative>`_
* `<Development>`_
* `<Infrastructure>`_
* `Services <Infrastructure-Services>`_
* `Machines <Infrastructure-Machines>`_
* `<Support>`_
```
For wiki content conversion:
```
python wiki_trac_rst_convert.py PATH/TO/GIT-REPO
```
Things that are not yet auto-converted:
- TracWiki 3rd-level headings: `=== Some sub-section ===`
- The sub-pages listing macro: `[[TitleIndex(Development/)]]`
- The local table of contents macro: `[[PageOutline]]`
- The `_Sidebar.rst` and `_Footer.rst` GitHub wiki meta-pages, which must be created manually.
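As an example of one of these manual fix-ups, a 3rd-level TracWiki heading can be rewritten as a reStructuredText heading with a short helper. This is a sketch only, not part of the shipped conversion script, and the choice of `-` as the underline character is an assumption about the target page's heading hierarchy:

```python
import re


def convert_h3(line: str) -> str:
    """Convert a TracWiki 3rd-level heading line to an RST heading.

    Sketch only; lines that are not 3rd-level headings pass through unchanged.
    """
    match = re.match(r"^===\s*(.+?)\s*===\s*$", line)
    if not match:
        return line
    title = match.group(1)
    # RST marks a heading by underlining it with a punctuation character.
    return title + "\n" + "-" * len(title)
```

Running each line of a page through `convert_h3` would handle the first bullet above; the macros still need hand-written replacements.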
# Ticket migration
The script to use is `ticket_migrate_golden_comet_preview.py`.
- Copy `config.py.sample` to `config.py`, and edit all the settings. Perhaps use a fake `OAUTH_TOKEN` to avoid accidental changes.
- Dump and convert the Postgres DB to SQLite, if you don't already have it, using `postgres-to-sqlite.sh`:

  ```
  scp postgres-to-sqlite.sh [email protected]:/tmp/postgres-to-sqlite_`date -I`.sh
  ssh [email protected]
  sudo su trac
  cd /tmp
  ./postgres-to-sqlite_`date -I`.sh  # Will take ~1 minute for ~10k tickets.
  ^D  # Exit su trac.
  ^D  # Close the SSH session.
  scp [email protected]:/tmp/results.sqlite3 results-`date -I`.sqlite3  # About 70M for 10346 tickets.
  ```
- Create the required files:

  ```
  touch tickets_created.tsv && touch tickets_expected_gold.tsv && touch milestones_created
  ```
- Modify `select_tickets` to your liking. Perform a dry run, which generates `tickets_expected.tsv`.
- Once the script generates the desired `tickets_expected.tsv`, copy it to `tickets_expected_gold.tsv`, so that the `tickets_expected.tsv` generated by future runs can be checked against it.
- Run another dry run, checking for "Warning: unknown ticket:" messages. There should be none if all required tickets are in `tickets_expected_gold.tsv`.
- If you are sure you want to create tickets, change `DRY_RUN` to `False` in `ticket_migrate_golden_comet_preview.py`.
- Run `python -u ./ticket_migrate_golden_comet_preview.py ../trac.db | tee -a output.txt`, where `../trac.db` is the path to the Trac SQLite DB dump.
- By the first non-dry-run breakpoint:
  * The milestones will have been created. Check `milestones_created.tsv`.
  * The new `tickets_expected.tsv` must match `tickets_expected_gold.tsv`.
  * If all is in order, continue by entering `c` at the debugger.
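The gold-file comparison in the steps above can be done with a plain `diff`, or with a short Python sketch like this one. The helper name is hypothetical, and it assumes one record per line; adjust to the actual TSV columns the script writes:

```python
def unexpected_lines(expected_path: str, gold_path: str) -> list:
    """Return lines of tickets_expected.tsv missing from the gold file.

    Sketch only: treats each line as an opaque record.
    """
    with open(gold_path) as gold_file:
        gold = set(gold_file.read().splitlines())
    with open(expected_path) as expected_file:
        return [
            line
            for line in expected_file.read().splitlines()
            if line not in gold
        ]
```

An empty result means future runs are still selecting exactly the tickets recorded in `tickets_expected_gold.tsv`.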
In the event a new ticket or PR is created while the script is running, you must manually add a fake entry to `tickets_created.tsv`, so that on retrying, as much as possible of `tickets_expected.tsv` still matches `tickets_expected_gold.tsv`. You must then manually fix any references to that GitHub ID.
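Appending such a fake entry could look like the sketch below. The two-column, tab-separated Trac-ID-to-GitHub-ID layout is an assumption, not confirmed by the script; inspect an existing `tickets_created.tsv` and match whatever columns it actually contains:

```python
def add_fake_entry(path: str, trac_id: str, github_id: str) -> None:
    """Append a placeholder Trac ID -> GitHub ID mapping.

    Assumed layout: one tab-separated pair per line. Verify against
    the real tickets_created.tsv before relying on this.
    """
    with open(path, "a") as created_file:
        created_file.write(trac_id + "\t" + github_id + "\n")
```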