so the code was just adding unnecessary complexity. The pipeline now uses
mp.pool to manage ffmpeg jobs as before (a sketch follows the list below).
This reverts commit f91109fb3e and deletes the
WorkThread class and its associated tests.
Also cleaned out irrelevant options from config.ini.example.
- Removed the encoder setting for transcodes in config.ini.example, since only
software encoding is supported now.
- Removed the jobsize setting from config.ini.example, since
TranscodeWhisperHandler no longer uses the job pool.
- Changed default pool and job size to saner values.
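For context, a minimal sketch of the pool-based approach, assuming plain
multiprocessing.Pool plus subprocess; file names and pool size here are
illustrative, not the project's actual configuration:

    import multiprocessing as mp
    import subprocess

    def run_ffmpeg(args):
        # Each worker process runs one ffmpeg job to completion.
        return subprocess.run(["ffmpeg", *args], check=False).returncode

    if __name__ == "__main__":
        jobs = [["-i", "in1.mp4", "out1.mp4"],
                ["-i", "in2.mp4", "out2.mp4"]]
        with mp.Pool(processes=2) as pool:
            results = pool.map(run_ffmpeg, jobs)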
The entire WorkThread class should probably be dropped.
- Overwrote subtitles_whisper with subtitles_whisper_hack
- Moved the comment noting that this is a hack down to the specific spot where the hack is applied
- Updated __init__.py to import from subtitles_whisper
The work invested in _hack is now significant enough that if automatic GPU
detection becomes viable again, the only meaningful starting point is
the _hack implementation. It is probably a good idea to remove _serial as well,
but it is left in place for now.
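If automatic GPU detection does become viable again, the entry point could
look roughly like this; it assumes a torch-backed whisper, which is an
assumption about the setup, not something these notes confirm:

    import torch

    def pick_device():
        # Prefer the GPU when one is usable, otherwise fall back to CPU.
        # Assumes whisper runs on torch; adjust for the actual backend.
        return "cuda" if torch.cuda.is_available() else "cpu"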
The function now takes an argument identifying the key being worked on.
The dict validation was also made more explicit about which keys it
checks and why it handles them the way it does.
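A rough sketch of the shape this gives the validator; the function name, key
handling, and error types are illustrative, not the project's actual API:

    def validate(data, key):
        # The key being worked on is now passed in explicitly.
        # Missing keys are rejected outright so a malformed jobspec
        # fails early instead of being half-processed.
        if key not in data:
            raise KeyError(f"required key {key!r} missing from jobspec")
        if data[key] is None:
            raise ValueError(f"key {key!r} must not be empty")
        return data[key]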
- start time is now recorded as a timestamp
- typo fix: match.groups() -> match.group() (see the sketch after this list)
- Subtitle generation is only requested if the presentation has video on channel 1
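To illustrate why that one-character difference matters (the pattern and
input here are made up):

    import re

    m = re.match(r"(\d+)-(\d+)", "2024-05")
    m.groups()   # ('2024', '05') - tuple of all capture groups
    m.group()    # '2024-05'      - the whole match, same as group(0)
    m.group(1)   # '2024'         - one specific capture group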
* renamed the 'data' dict to 'arec_data' and made sure all references are consistent
* switched the recorder identifier from hostname to description, because there was a wild \n in the hostname for unknown reasons
* removed timezone awareness from starttime and endtime to make them compatible with times loaded from daisy
Also moved the code for creating a basic jobspec and pulling information from
the relevant daisy booking into the preprocessor superclass so it can be
called by both the cattura and arec preprocessors.
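Structurally, that arrangement could look roughly like this; all class,
method, and field names are hypothetical stand-ins for the project's code:

    class Preprocessor:
        def base_jobspec(self, booking):
            # Shared logic: build a basic jobspec from the daisy booking.
            return {
                "title": booking.get("title"),
                # Naive datetimes, to stay comparable with times loaded
                # from daisy (see the bullet list above).
                "starttime": booking["starttime"].replace(tzinfo=None),
                "endtime": booking["endtime"].replace(tzinfo=None),
            }

    class CatturaPreprocessor(Preprocessor):
        def preprocess(self, booking):
            jobspec = self.base_jobspec(booking)
            # ... cattura-specific fields ...
            return jobspec

    class ArecPreprocessor(Preprocessor):
        def preprocess(self, booking):
            jobspec = self.base_jobspec(booking)
            # ... arec-specific fields ...
            return jobspec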
instead of after all the startup tasks are done. This alleviates a potential
problem where start() could be called more than once in quick succession
with unpredictable results.
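One way to make start() tolerate rapid repeated calls is to mark the
instance as started before running the startup tasks; a sketch, with a lock
added as an extra assumption beyond what the message above describes:

    import threading

    class Watcher:
        def __init__(self):
            self._start_lock = threading.Lock()
            self._started = False

        def start(self):
            # The flag is set at the top, so only the first caller
            # performs startup; later calls return immediately.
            with self._start_lock:
                if self._started:
                    return
                self._started = True
            # ... startup tasks run here ...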
in order to properly handle NFS-mounted queue directories.
As a consequence, the default timeout for awaiting job results in the tests
was upped to 10 seconds due to the slightly slower queue pickup.
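The mechanism itself is not spelled out above, but a polling watcher is one
plausible approach for NFS-mounted queue directories, since inotify-style
events are not delivered for changes made by other NFS clients; this sketch
is an assumption, not the project's code:

    import os
    import time

    def poll_queue(queue_dir, interval=1.0):
        # Polling sees new files on NFS where event-based watchers do
        # not; the cost is a pickup delay of up to one interval, which
        # is why the test timeout above was raised.
        seen = set()
        while True:
            for name in sorted(os.listdir(queue_dir)):
                if name not in seen:
                    seen.add(name)
                    yield os.path.join(queue_dir, name)
            time.sleep(interval)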