As I explained to Jean-Francois above, I think the speed is helped by the fact that I am doing quite a bit of the spherical geometry calculations before writing the index. One of the biggest parts of what is left is calculating the invariant properties (hash code values) of the quads - this requires more 2D geometry than I think is strictly necessary. I do have an idea for an alternate way to get the hash codes calculated, but haven't had a chance to try it out yet.
In ASTAP the star database contains the RA, Dec and magnitude. The magnitude is not really required since the stars are sorted by magnitude. From the RA, Dec equatorial coordinates ASTAP calculates the standard coordinates according to the rigid method. Maybe it could be simplified by just correcting the RA by cos(Dec); I have never tried it, but I assume you would run into problems near the celestial pole sooner.
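For illustration (in Python rather than ASTAP's Pascal), here is a sketch of the two approaches being compared: the gnomonic projection onto standard coordinates versus the cheap cos(Dec) correction. The function names and conventions are my own, not ASTAP's.

```python
import math

def gnomonic(ra, dec, ra0, dec0):
    """Project equatorial coordinates (radians) onto the tangent plane
    at (ra0, dec0), giving the 'standard coordinates' xi, eta."""
    cos_c = (math.sin(dec0) * math.sin(dec)
             + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / cos_c
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cos_c
    return xi, eta

def flat_approx(ra, dec, ra0, dec0):
    """Cheap alternative: just scale the RA offset by cos(dec0).
    Usable for small fields away from the pole, but degenerates
    as dec0 approaches +/-90 degrees."""
    return (ra - ra0) * math.cos(dec0), dec - dec0
```

For a small offset near the equator the two agree very closely; the divergence grows with field size and with proximity to the pole.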
For years I have considered preprocessing the stars directly into hash codes. I assume the database would grow significantly, but if it were sorted/indexed, blind solving and normal solving might execute at about the same speed. The FOV would then likely not play a role either. The argument against it is that blind solves are rare.
Star detection is fortunately something that I have had in SharpCap for a long time for other features (polar alignment, live stacking, focus detection, etc.), so I had code for that ready to use. I am still fine-tuning the sensitivity for best results - I may eventually need to add an end-user adjustment for the sensitivity.
In my experience there is an optimum somewhere for star detection sensitivity. If it is too sensitive, the solve success rate goes down due to too many ghost stars. In my program the limit is set at SNR 7, but that probably depends on the SNR definition used.
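As a sketch of that kind of cut (Python; the field names and the way SNR is computed here are my assumptions, not ASTAP's actual definition), filtering detections against an SNR limit is a one-liner:

```python
def filter_stars(detections, snr_limit=7.0):
    """Keep only detections above the SNR limit. Too low a limit lets
    noise peaks ('ghost stars') into the matching and hurts the solve
    success rate; too high a limit leaves too few stars to form quads."""
    return [d for d in detections if d['flux'] / d['noise'] >= snr_limit]
```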
I do have some triangle alignment code in SharpCap that is still used for live stacking alignment - it works, but does produce false positives at a higher frequency than the quad alignment, so I only use it for live stacking where the issue can be dealt with easily (there is always likely to be a good match and you can filter to only look at matches that have a scaling factor of ~1).
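A sketch of that scale filter (Python; the match record layout and the tolerance value are assumptions of mine, not SharpCap's code):

```python
def plausible_matches(matches, expected_scale=1.0, tol=0.05):
    """Reject triangle matches whose implied scale factor differs from
    the expected ~1:1 scaling between consecutive live-stacking frames.
    This discards most false positives cheaply."""
    return [m for m in matches
            if abs(m['scale'] - expected_scale) <= tol * expected_scale]
```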
Yes, quads seem to be the best setup. According to some old Astrometry.net documentation, using 5-star figures results in more processing time and did not bring any real benefit.
For my documented setup I use the distances between the four stars. Astrometry.net uses a very different (and complicated) quad hash code. I assume they can do that because their hash codes are preprocessed and, if I remember well, stored in FITS files.
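For illustration, here is a generic distance-ratio quad hash in Python. Normalising the six pairwise distances by the largest makes the code invariant to translation, rotation and scale. This is only a sketch of the idea; the exact recipes used by ASTAP and Astrometry.net differ from this.

```python
import itertools
import math

def quad_hash(stars):
    """Hash a 4-star figure by its six pairwise distances, sorted and
    divided by the largest. The result (five ratios in [0, 1]) is the
    same for any translated, rotated or scaled copy of the figure."""
    d = sorted(math.dist(a, b) for a, b in itertools.combinations(stars, 2))
    return tuple(x / d[-1] for x in d[:-1])
```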
I think that the issue you see with the failure message not showing is down to the use of multiple threads (up to 4) to parallelize the solving. Each thread sends a notification to the UI when it finishes checking a particular area of the sky, and that becomes the 'Searching at ...' message. Sometimes one of those messages gets displayed after the 'Plate solve failed' message, so the failed message is lost. I think I have fixed this in 4.1.11239 by synchronizing the way that the messages are forwarded from the worker threads to the UI.
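One common way to get that ordering guarantee (not necessarily what SharpCap does internally) is to funnel all worker messages through a single queue drained by one consumer, and to post the final 'failed' message only after every worker has been joined. A minimal Python sketch, with message shapes of my own invention:

```python
import queue
import threading

def solver_worker(region, out_q):
    # ... search 'region' of the sky here ...
    out_q.put(('searching', region))   # progress notification to the UI

def ui_pump(out_q, workers):
    """Single consumer drains the queue in FIFO order, and the final
    result is only posted after all workers have finished, so a late
    'searching' message can never overtake the 'failed' message."""
    for w in workers:
        w.join()
    out_q.put(('failed', None))
    msgs = []
    while not out_q.empty():
        msgs.append(out_q.get())
    return msgs
```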
How do you split the threads? Are they all doing the same work, but on a different part of the sky? I assume it depends on whether the star database is preloaded into memory. Then you can split the sky into e.g. four parts. If the database is not preloaded into memory, using threads is likely more complicated, since all running threads will want to access the database on disk. I assume the star database of SharpSolve is the file psindex.bin and is preloaded? Threaded processing is still on my wish list, but I'm not sure if it will be very beneficial for short offsets.
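If the index is preloaded into memory, read-only sharing makes the threading straightforward: no locking and no disk contention. A hypothetical sketch in Python, where `search_one_region` stands in for whatever per-region search the solver actually does:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_solve(search_one_region, regions, n_threads=4):
    """Run the same search over different sky regions in parallel.
    Because every worker only reads the shared in-memory index,
    no synchronization between them is needed; results come back
    in the same order as the input regions."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(search_one_region, regions))
```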
PS. I use Gaia DR3 for the star database. UCAC4 is fine but doesn't go that deep.
PS. Do you include non-stars in your database (I am mostly thinking of small objects that could be mistaken for stars like distant galaxies). I can't decide if this would be a good idea or not...
In the past I tried to avoid detecting small galaxies in the image by their ovality. In practice it didn't bring a benefit. Currently I just assume that many more stars than galaxies are detected, so the galaxies do not disturb the solving process.
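For reference, the ovality of a detection can be computed from its second image moments as the axis ratio of the fitted ellipse (1.0 for a perfectly round star). A sketch of the idea Han describes, with my own variable names; his actual implementation may differ:

```python
import math

def ovality(mxx, myy, mxy):
    """Major/minor axis ratio of the ellipse fitted to a detection,
    from its central second moments. Returns 1.0 for a round source;
    elongated sources (galaxies, trails) give larger values.
    Degenerate (line-like) sources would divide by zero here."""
    t = mxx + myy
    d = math.sqrt((mxx - myy) ** 2 + 4 * mxy ** 2)
    return math.sqrt((t + d) / (t - d))
```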
With Gaia (a space telescope), the star density can be much higher than with UCAC4 (an earth-based survey). This could be a problem for images where M13 fills most of the frame, but it will be less of a problem for a database with a specific star density like you have now.
Cheers, Han