Warning:
Never change the values of the flags. Changing them will introduce inconsistencies in the file structures, and I cannot merge a PR if the script was run with changed flag values.
Setting checkduplicate to true makes the script check for duplicate translations and skip adding a new translation if a duplicate copy of it already exists.
Setting jsonrequired to true makes the JSON in the translation mandatory; the script will fail if no JSON is found in the translation file.
Setting generateLatin to true makes the script auto-generate the latin/roman script translation using the translate.py script.
The CI flag is set to true by GitHub Actions during a workflow run. Setting CI to true in the OS environment makes the script generate all files and folders in the REST architectural style, which is usually not required during development.
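For reference, here is a minimal Python sketch of what the four flags control; the real flags and their values live in apiscript.js, so the names and defaults below are illustrative only.

```python
import os

# Illustrative sketch only -- the actual flags are defined in apiscript.js
# and must not be changed when running the script for a PR.
checkduplicate = True   # skip a translation if an identical copy already exists in the database
jsonrequired   = True   # fail if no JSON is found in the translation file
generateLatin  = True   # auto-generate the latin/roman script edition via translate.py

# CI is read from the OS environment; GitHub Actions sets it to true during a workflow run.
ci = os.environ.get("CI", "false").lower() == "true"
```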
It starts by reading the files inside the start directory, checks for JSON, and validates and corrects the translation in case it is not in the proper format. After the translation passes this step, the script checks the database for an existing copy to avoid adding duplicate translations.
Next, other values such as the link are added to the JSON. The language direction is detected using the browser, and in case the language was not specified properly, it is detected using translate.py; however, translate.py often returns a wrong value, which is why specifying the language is required.
The editionName is auto-generated from the JSON values to maintain standard naming. The files are then generated using info.json.
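As a rough illustration, the metadata attached at this stage looks conceptually like the sketch below; the field names are hypothetical, and the real schema is whatever apiscript.js writes into info.json.

```python
# Hypothetical illustration only -- the actual info.json schema is defined by apiscript.js.
edition_info = {
    "language": "bengali",           # must be specified correctly in the submitted file
    "direction": "ltr",              # detected, with translate.py as the fallback
    "link": "https://example.org/",  # hypothetical source link
}
# The editionName is then derived from values like these so that naming stays consistent.
```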
If the translation is in latin script with diacritical marks, an editionName-la version is generated, i.e. an edition without diacritical marks. It goes through the same process as the original translation (JSON validity checks, language detection), but it reuses the editionName of the original translation instead of auto-generating a new one; only '-la' is appended to it.
If the translation is in another script, such as Chinese, the latin version is generated using the translate.py script. It goes through the same process as the original translation (JSON validity checks, language detection), but it reuses the editionName of the original translation instead of auto-generating one; only '-la' or '-lad' is appended, depending on the generated translation.
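In both cases the latin edition never gets a freshly generated name; a suffix is simply appended to the original editionName. A trivial sketch of that rule (the choice between '-la' and '-lad' is made inside apiscript.js):

```python
def latin_edition_name(original_edition_name: str, suffix: str = "-la") -> str:
    # suffix is '-la' or '-lad'; which one applies depends on the generated translation.
    return original_edition_name + suffix

latin_edition_name("ara-exampleedition")          # hypothetical name -> 'ara-exampleedition-la'
latin_edition_name("zho-exampleedition", "-lad")  # hypothetical name -> 'zho-exampleedition-lad'
```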
After everything is done, the files for which create was successful are moved to the database/originals directory. The start directory will then be empty, or it will contain only those files for which the translation generation process was not successful.
After this, editions.json is updated to reflect the directory changes. In case the start directory is not empty after running the create command, you can run the create command again to see what errors those translation files have.
It goes through the same process as create, except that the JSON values from the newly updated file are given more priority, and a file with the same name must already exist in the database; that is why we copy it from the database/chapterverse folder, so that it can run smoothly.
It deletes the edition and regenerates editions.json to reflect the directory changes.
Searches the database for a given string. It uses regex, so commas, special symbols, double spaces, etc. are all ignored, which makes it highly accurate.
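The normalisation idea is roughly the following; this is a conceptual Python sketch, while the actual search is implemented in apiscript.js.

```python
import re

def normalise(text: str) -> str:
    text = re.sub(r"[^\w\s]", " ", text)  # drop commas and other special symbols
    text = re.sub(r"\s+", " ", text)      # collapse double spaces
    return text.strip().lower()

normalise("In the name of God,  the  Merciful")  # -> 'in the name of god the merciful'
```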
It first renames font names to standard names, removing all duplicate names, then generates the fonts using fontsquirrel, and then the generated fonts are moved one by one to the fonts directory.
The original files from the start directory get an -org suffix added when they are moved to the fonts directory. fonts.json and fonts.min.json are generated using the fonts inside the fonts directory, and the font metadata is added using opentype.js.
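Below is a small sketch of the renaming step, assuming the '-org' suffix goes just before the file extension; the real pipeline, including the fontsquirrel generation and the opentype.js metadata, lives in apiscript.js.

```python
from pathlib import Path

def archive_original(font_file: Path, fonts_dir: Path) -> Path:
    # Hypothetical helper: move a source font into fonts/ with an '-org' suffix,
    # e.g. start/MyFont.ttf -> fonts/MyFont-org.ttf
    target = fonts_dir / f"{font_file.stem}-org{font_file.suffix}"
    return font_file.rename(target)
```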
It takes an array of strings and returns the translation in JSON format. To detect a language, the first string should be 'detect' and the next argument should be the string whose language is to be detected.
It uses the googletrans Python library, which in turn uses Google Translate, to translate a given text. We use it only to get the latin script of the input language; we do not care about the translated text.
In the future it is possible that googletrans might break, but we can still get the same functionality using a browser and get the latin script translations from Google Translate.
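For illustration, this is roughly how the latin script can be pulled out of googletrans; the exact calls in translate.py may differ, and depending on the googletrans version these methods may need to be awaited.

```python
from googletrans import Translator

translator = Translator()

# Language detection (the 'detect' mode mentioned above): returns a code such as 'bn'.
print(translator.detect("বাংলা লেখা").lang)

# Romanisation: translating a text into its own language and reading the
# pronunciation field yields the latin script; the translated text itself is ignored.
result = translator.translate("بسم الله الرحمن الرحيم", src="ar", dest="ar")
print(result.pronunciation)
```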
This repo uses GitHub Actions to perform all the operations automatically on every push to this repo; the code is in run.yml. The workflow can be triggered either by adding files to the start directory and/or by editing the command.txt file as shown here. It can also be triggered manually from the Actions tab.
During the workflow run, the repo is partially cloned, and the commands are stored in environment variables, taken from command.txt or from the values entered during a manual run. The 1st line of command.txt is used as the apiscript.js command, and the 2nd line is used as the arguments for that command.
Then the sparse-checkout arguments are dynamically generated depending on the apiscript.js command, and the pip & npm package cache is used to save resources. The dependencies are then installed using requirements.txt and package-lock.json (package-lock.json was automatically generated from package.json during the installation phase).
Then the apiscript.js command is executed, command.txt is emptied, the logs are saved, and the changes are committed and pushed to this repository.
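For example, a hypothetical command.txt that runs the search command could look like this; the first line is the apiscript.js command and the second line its arguments, and the exact argument syntax for each command is defined in apiscript.js.

```
search
bismillah
```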
Note:
In case you are trying to modify how the files in the editions directory or the chapterverse directory look, you might want to set the CI flag to true.
- Fork the quran-api repo
- Clone the forked repo:
git clone --filter=blob:none --no-checkout --depth 1 --sparse <YourFork.git>
cd quran-api
git sparse-checkout set '/*' '!/editions/' '!/database/originals/' '!/database/chapterverse/'
git checkout
- Make your changes and test the code.
- Now push the changes and create a PR.
In case you still have questions, refer to apiscript.js, translate.py, and run.yml, and you can also ask me.