[ENH]: investigate more reasonable default for `reg` param in `ProjCommon` (#35)
The typical way of doing this is to have the shrinkage parameter proportional to the trace of the matrix. |
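For concreteness, here is a minimal sketch of trace-proportional shrinkage (illustrative only, not the actual `ProjCommon` code; the function name and default `reg` value are made up):

```python
import numpy as np

def shrink_to_identity(C, reg=0.1):
    """Shrink a covariance toward a trace-scaled identity.

    The identity target is scaled by trace(C) / p, so the same `reg`
    value behaves comparably across matrices of different scales.
    """
    p = C.shape[0]
    return (1 - reg) * C + reg * (np.trace(C) / p) * np.eye(p)
```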
... we could also remove the parameter altogether. Why re-implement shrinkage if the user has all the options to estimate robust covariances using OAS, Ledoit-Wolf, etc.? |
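A minimal example of the alternative mentioned here, estimating regularized covariances upstream with scikit-learn (random data, purely for illustration):

```python
import numpy as np
from sklearn.covariance import LedoitWolf, OAS

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))  # n_samples x n_features

C_lw = LedoitWolf().fit(X).covariance_   # Ledoit-Wolf shrinkage
C_oas = OAS().fit(X).covariance_         # Oracle Approximating Shrinkage
```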
yes agreed
|
Give me some time to think about it
|
The projection step in Riemann includes 1/ a PCA (ProjCommon) and 2/ a normalization by the trace.
1/ The PCA is designed to overcome rank-deficient covs by making them full rank at a reduced dimension.
If the covs are already full rank (e.g. the covs from the Larib project), this step is unnecessary: a full-rank PCA keeps the same number of dimensions, so it is just a rotation, which has no impact on Riemann thanks to affine invariance.
If the covs are not full rank, this step is necessary for Riemann to work.
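For illustration, a minimal sketch of this kind of common projection (not the actual `ProjCommon` implementation; names and shapes are assumptions):

```python
import numpy as np

def project_common(covs, n_components):
    """Project every covariance onto the leading eigenvectors of the mean cov.

    Rank-deficient covs become full rank in the reduced space. If the covs
    are already full rank and n_components equals the original dimension,
    this is only a rotation, which affine-invariant metrics ignore.
    """
    C_mean = covs.mean(axis=0)                    # (p, p) average covariance
    _, eigvecs = np.linalg.eigh(C_mean)           # eigenvalues in ascending order
    V = eigvecs[:, ::-1][:, :n_components]        # keep the leading eigenvectors
    return np.array([V.T @ C @ V for C in covs])  # (n_subjects, k, k)
```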
2/ Normalization by the trace: covs are scaled before vectorization by being divided by the mean trace *across subjects*, *separately in each frequency band*.
This was initially done to handle numeric instabilities with the PCA for rank-reduced covs.
This normalization could in principle make a difference in performance, since the scaling factor differs per frequency band and we concatenate the resulting band-specific vectors to make our final vector (the input of the ridge model).
If the scaling factor is similar across frequency bands (at the limit, if they are all equal), the performance of Riemann stays the same thanks to affine invariance.
I think this is what happens with the Lariboisiere full-rank covs, on which the normalization makes almost no difference in performance.
But we can't be sure this is always the case in general.
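A sketch of the trace normalization described above (illustrative names; `covs_per_band` is assumed to map a band name to a per-subject array of covariances):

```python
import numpy as np

def scale_by_mean_trace(covs_per_band):
    """Divide covs by the mean trace across subjects, separately per band.

    If the per-band scaling factors were all equal this would be a global
    rescaling that affine-invariant metrics ignore; in general they differ,
    so the concatenated band-specific vectors (and performance) can change.
    """
    scaled = {}
    for band, covs in covs_per_band.items():     # covs: (n_subjects, p, p)
        mean_trace = np.mean([np.trace(C) for C in covs])
        scaled[band] = covs / mean_trace
    return scaled
```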
|
Now, for the Cam-CAN data, Apolline finds that you can avoid the 2nd step by not additionally regularizing the covariance in PCA space. |
@DavidSabbagh, did you think about it? Meanwhile, @apmellot keeps getting good results across all benchmarks with the new settings. |
I agree. Let's turn off that normalization by default, while keeping it as an available option. |
This should be addressed now by #51; let's keep it open to see if we're happy. |
@apmellot has made the discovery that the strange necessity to scale by the trace of the cov can be avoided by reducing the amount of shrinkage via `reg` in `ProjCommon`. This makes intuitive sense, and I think we should simply not shrink by default, especially as we commonly use regularized covariance estimates to begin with and dimensionality reduction has a shrinking effect. FYI @DavidSabbagh @agramfort
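Purely as a sketch of what "not shrinking by default" could look like (parameter names mirror the discussion but are not necessarily the real `ProjCommon` signature):

```python
import numpy as np

def project_common(covs, n_components, reg=0.0):
    """Common projection with optional extra shrinkage in the reduced space.

    With the proposed default reg=0.0 no extra shrinkage is applied: the
    covs are assumed to be regularized upstream (e.g. OAS or Ledoit-Wolf)
    and the dimensionality reduction already has a shrinking effect.
    """
    C_mean = covs.mean(axis=0)
    _, eigvecs = np.linalg.eigh(C_mean)
    V = eigvecs[:, ::-1][:, :n_components]
    out = np.array([V.T @ C @ V for C in covs])
    if reg:  # optional diagonal loading, off by default
        out = out + reg * np.eye(n_components)
    return out
```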