Investigate means of authenticating a node ID when adding to NodeGraph
#322
Comments
There are 4 cases where we add a node to the node graph.
1. Manually adding a node: this should ping the node before adding it to the node graph.
2. Pinging a node: this will add the node to the graph, but it should be done in the CLI ping command's GRPC handler.
3. Adding a node after connecting to it: this can be handled by the nodeConnectionManager.
4. Receiving a connection: the connection will be verified by the reverse proxy and then added to the nodeGraph. This is addressed in #344.
With the changes in commit 7cc74d0 I added a check for whether a node was alive. In any case we will need to connect to a node and check its nodeId and certificate data. Maybe we can create a new agent-agent RPC for getting a remote node's certificates and nodeId? Isn't this already what ... does?
Yeah, the closest issue is #353, by creating a timeout for ... I think however the proxies ... Also relevant is #297 as background theory for the future.
Furthermore, solving #353 would just reduce the amount of time waiting for a ping. To really solve the problem of blocking, we need to remove blocking entirely. That is, we wouldn't want to block a request to contact a Node or do operations against another Node due to pinging old nodes in order to maintain the bucket limit. I suggest that this logic should be non-blocking. To do so, we can either do it non-blocking in-memory by queuing up a job using JS's own event loop, or queue it up into the DB and run it as part of background work, as suggested at the end of the longer Kademlia trie paper.
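For the in-memory option, a minimal sketch of deferring the bucket-maintenance pings onto the event loop so the caller's request is never blocked (all names here are hypothetical, not the actual Polykey API):

```ts
// Hypothetical sketch: defer bucket maintenance off the critical path.
type Task = () => Promise<void>;

class BackgroundTasks {
  protected tasks: Task[] = [];
  protected draining = false;

  // Queue a task; it runs on the event loop after the current call returns
  public push(task: Task): void {
    this.tasks.push(task);
    if (!this.draining) {
      this.draining = true;
      setImmediate(() => void this.drain());
    }
  }

  protected async drain(): Promise<void> {
    while (this.tasks.length > 0) {
      const task = this.tasks.shift()!;
      try {
        await task();
      } catch (e) {
        // Background failures are logged rather than thrown into callers
        console.error('background task failed', e);
      }
    }
    this.draining = false;
  }
}

// Usage: instead of awaiting the old-node pings inline, setNode could do
// backgroundTasks.push(async () => { /* ping oldest nodes, evict dead ones */ });
```

The DB-backed alternative would persist the same jobs so they survive restarts, at the cost of extra serialisation.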
I suggest creating a new issue for this non-blocking problem as it relates to #353.
Ah, I just remembered that the feature added in 7cc74d0 just checks that the old node is still alive. As mentioned above, authenticating it still needs to be done at this stage, and that's only really needed to make sure that the node hasn't changed identities; it should've been authenticated before it was added to the graph. For adding a new node, I think the expectation is that the authentication is done when connecting to it, just before adding it to the nodeGraph. We could use the same method we use to authenticate the old nodes, but we would be doubling up the action of connecting to the node, since the action of adding a node is triggered by a connection to a node in the first place. As for how the node is authenticated, we would need to obtain the connection info from the newly combined ...
Ignore this, moved old spec back.
I think the proxy open connection already does authentication by checking the certificate. Therefore the need to authenticate a node would only arise when one is explicitly adding a node to the node graph. Any authentication would need to be done prior to adding the node to the node graph. One question is whether we should allow users to explicitly add nodes to the node manager/graph straight from the command line, and therefore not have them authenticated ahead of time. I think we should allow this, as it would primarily be used for special situations and debugging. The node manager's add node method can schedule an authentication by scheduling a ping node. I think the ping node should not just establish a proxy connection but also check the certificates, which I believe the proxy does already. But you should check @tegefaulkes
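Loosely, that certificate check boils down to: take the public key the peer presented in its TLS certificate and confirm the claimed node ID is derived from it. A rough sketch, assuming the node ID is a hex SHA-256 fingerprint of the certificate's public key (which may not be the exact derivation Polykey uses):

```ts
import { createHash } from 'crypto';
import type { TLSSocket } from 'tls';

// Rough sketch: verify that the node ID a peer claims matches the public key
// in the certificate it presented. The fingerprint scheme here (hex SHA-256
// of the DER public key) is an assumption, not Polykey's actual derivation.
function verifyClaimedNodeId(socket: TLSSocket, claimedNodeId: string): boolean {
  const cert = socket.getPeerCertificate();
  if (cert == null || cert.pubkey == null) return false;
  const fingerprint = createHash('sha256').update(cert.pubkey).digest('hex');
  return fingerprint === claimedNodeId;
}
```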
We should re-use the ping node as the basis of "authenticating a node". It would then be used when refreshing buckets and dealing with filled buckets. We could add a boolean flag to the add node method which controls whether to authenticate the node prior to adding.
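As a hedged sketch of that flag idea (the signatures here are illustrative, not the actual NodeManager API):

```ts
// Illustrative only; the real NodeManager/NodeGraph signatures differ.
type NodeId = string;
type NodeAddress = { host: string; port: number };

declare const nodeGraph: {
  setNode(nodeId: NodeId, address: NodeAddress): Promise<void>;
};
// Assumed to establish a proxy connection and verify the certificate
declare function pingNode(nodeId: NodeId, address: NodeAddress): Promise<boolean>;

async function setNode(
  nodeId: NodeId,
  address: NodeAddress,
  authenticate: boolean = true,
): Promise<void> {
  if (authenticate) {
    const alive = await pingNode(nodeId, address);
    if (!alive) {
      throw new Error(`Node ${nodeId} failed authentication, not adding`);
    }
  }
  await nodeGraph.setNode(nodeId, address);
}
```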
Right now the ...
Hmm, the spec I created here relates more to issue #344, but it still partly relates to this one. The scope of this issue relates more to authenticating and adding nodes when connecting TO a node, whereas 344 relates to authenticating and adding a node when receiving a connection FROM a node. I hoped that I could handle adding nodes for ... So adding nodes based on reverse connections can be automatic, but we may have to handle forward connections separately. In any case, this issue should focus more on authenticating the forward connections and pinging the old nodes when we need to add a new node to a full bucket.
I reverted to the old spec. This needs to be spec'd out focusing on authenticating and adding nodes on forward connections. The ... So maybe, like mentioned above, we can just use the ... The ... I'll have to review the code and spec this further.
Old spec

Specification

We currently have some instances when adding nodes to our ... Previously, this was intended as a primitive means of ensuring that only node IDs that corresponded with valid keynodes were added to one's own ... This problem relates to mitigating the impact malicious actors/keynodes have on the wider Polykey network, and keeping the network as "clean" as possible. There needs to be further discussion about how this will be considered with Polykey, with this ...

Additional context
Tasks
I feel like I keep editing over issues that describe problems to turn them into design specifications. I feel that we should keep the issues that describe problems as they are. When we find a solution to the problem we should create a new design issue specifying the solution and design spec. Basically I think we should have separation between problem definitions and problem solutions.
It's just a thought I had.
Tests relating to new ...
It's fine to edit issues; in fact, they should be edited. We don't want to create too many issues, as old information becomes outdated. So editing is just a natural process of pruning and updating information. It makes it easier to do the R&D report later.
Looking into how to implement the ... So we can add a ... May as well add optional connection timeouts here as well. Do you think this is a good course of action @CMCDragonkai?
Wild question: how do we actually get the host and port of the client connecting to the forward proxy?
Added a `nodePing` command to `NodeConnectionManager`, and `NodeManager.nodePing` now calls that. The new `nodePing` command essentially attempts to create a connection to the target node. In this process we authenticate the connection and check that the nodeIds match. If no address is provided, it will default to trying to find the node through Kademlia. Related #322
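A sketch of that flow as described (the helpers `findNode` and `createConnection` are placeholders, not the real `NodeConnectionManager` API):

```ts
// Placeholder sketch of the described nodePing flow.
type NodeId = string;
type NodeAddress = { host: string; port: number };

// Kademlia lookup, used when no address is supplied
declare function findNode(nodeId: NodeId): Promise<NodeAddress | undefined>;
// Assumed to verify the peer certificate against the expected nodeId
declare function createConnection(
  nodeId: NodeId,
  address: NodeAddress,
  timeout?: number,
): Promise<{ destroy(): Promise<void> }>;

async function nodePing(
  nodeId: NodeId,
  address?: NodeAddress,
  timeout?: number,
): Promise<boolean> {
  const target = address ?? (await findNode(nodeId));
  if (target == null) return false;
  try {
    // Establishing the connection is what authenticates the peer
    const conn = await createConnection(nodeId, target, timeout);
    await conn.destroy();
    return true;
  } catch {
    return false;
  }
}
```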
As for making all of this non-blocking, I think that should be spec'd out in a separate issue.
Need to ensure validity of nodes by pinging them before adding them to the node graph. #322
`setNode` now authenticates the node you are trying to add. Added a flag for skipping this authentication, as well as a timeout timer for the authentication. This is shared between authenticating the new node and the old node if the bucket is full. Related #322
Updated `NodeConnectionManager.pingNode` to just use the proxy connection. #322
`setNode` now pings 3 nodes concurrently, updating ones that respond and removing ones that don't. If there is room in the bucket afterwards, then we add the new node. #322
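Roughly, the full-bucket path could look like this (the helper names and the exact NodeGraph methods are assumptions for illustration):

```ts
// Assumed sketch of the bucket-full path: ping the 3 oldest nodes at once,
// evict the ones that don't respond, then add the new node if space opened up.
type NodeId = string;
type NodeAddress = { host: string; port: number };

declare function getOldestNodes(bucketIndex: number, count: number): Promise<NodeId[]>;
declare function pingNode(nodeId: NodeId): Promise<boolean>;
declare const nodeGraph: {
  setNode(nodeId: NodeId, address: NodeAddress): Promise<void>;
  unsetNode(nodeId: NodeId): Promise<void>;
  bucketHasRoom(bucketIndex: number): Promise<boolean>;
};

async function handleFullBucket(
  bucketIndex: number,
  newNodeId: NodeId,
  newAddress: NodeAddress,
): Promise<void> {
  const oldest = await getOldestNodes(bucketIndex, 3);
  // Ping all three candidates concurrently rather than one after another
  const alive = await Promise.all(oldest.map((id) => pingNode(id)));
  for (let i = 0; i < oldest.length; i++) {
    // Responders would get their lastUpdated time refreshed; non-responders are evicted
    if (!alive[i]) await nodeGraph.unsetNode(oldest[i]);
  }
  if (await nodeGraph.bucketHasRoom(bucketIndex)) {
    await nodeGraph.setNode(newNodeId, newAddress);
  }
}
```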
`setNode` now has a `blocking` flag that defaults to false. If it encounters a full bucket when adding a node, then it will add the operation to the queue and asynchronously try a blocking `setNode` in the background. `setNode`s will only be added to the queue if the bucket was full. #322
`NodeConnectionManager` now takes `NodeGraph` in the `nodeConnectionManager.start` method. It has to be part of the start method since they are co-dependent. `NodeConnectionManager` calls `NodeManager.setNode()` when a connection is established. This fulfills the condition of adding a node to the graph during a forward connection. Fixed up tests that were failing in relation to the `NodeManager` `StartStop` conversion. #322
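As a sketch of that start-time injection (illustrative API only; per the commit the injected piece may be `NodeGraph` rather than `NodeManager`, but the add goes through `NodeManager.setNode`):

```ts
// Illustrative sketch: co-dependent components are wired at start, not construction.
type NodeId = string;
type NodeAddress = { host: string; port: number };

interface NodeManager {
  setNode(nodeId: NodeId, address: NodeAddress): Promise<void>;
}

class NodeConnectionManager {
  protected nodeManager?: NodeManager;

  // Injecting the dependency at start avoids a circular constructor dependency
  public async start({ nodeManager }: { nodeManager: NodeManager }): Promise<void> {
    this.nodeManager = nodeManager;
  }

  // Called once a forward connection to a node has been established
  protected async handleConnectionEstablished(
    nodeId: NodeId,
    address: NodeAddress,
  ): Promise<void> {
    await this.nodeManager?.setNode(nodeId, address);
  }
}
```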
`NodeManager.setNode` and `NodeConnectionManager.syncNodeGraph` now utilise a single, shared queue to asynchronously add nodes to the node graph without blocking the main loop. These methods are both blocking by default but can be made non-blocking by setting the `block` parameter to false. #322
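A minimal sketch of what such a shared queue might look like (the queue class and `setNodeInner` are hypothetical; the real implementation lives inside `NodeManager`/`NodeConnectionManager`):

```ts
// Hypothetical sketch of a single shared queue serialising setNode work.
class SetNodeQueue {
  protected tail: Promise<void> = Promise.resolve();

  // Chain the operation onto the queue so additions run one at a time
  public push(op: () => Promise<void>): Promise<void> {
    this.tail = this.tail.then(op, op);
    return this.tail;
  }
}

const queue = new SetNodeQueue();

declare function setNodeInner(nodeId: string): Promise<void>;

async function setNode(nodeId: string, block: boolean = true): Promise<void> {
  const queued = queue.push(() => setNodeInner(nodeId));
  if (block) {
    // Blocking: the caller waits until the node has actually been handled
    await queued;
  } else {
    // Non-blocking: the queue drains in the background; failures are only logged
    queued.catch((e) => console.error('queued setNode failed', e));
  }
}
```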
Specification
With #344 more or less solved, we need a way of authenticating nodes when adding forward connections. This includes authenticating the old connections when a bucket fills up. In both cases we can do a `pingNode` to check if a node is alive. However, the current implementation of `pingNode` is insufficient for the job as it only checks if a node is alive.

`NodeManager.pingNode` needs to be updated to match the following criteria: ...

The `NodeManager.setNode` needs to be updated to do the following:

- ... `pingNode`. This is already done, just need to update ping for this. This will need to concurrently check alpha of the oldest connections.

`NodeConnectionManager` needs to be updated to:

- ... `NodeManager` as part of start.
- ... `NM.queueSetNode` ...

`NodeManager` needs to be updated to: ...

After that we need to make sure every instance where we need to add a node just calls `NodeManager.setNode`. Where this is done still needs to be spec'd out, but this may be outside the scope of this issue. Create a new issue for this?

Additional context

- `NodeGraph` ...
- #344

Tasks

1. Update `NodeManager.pingNode` to match the above specification.
2. Update `NodeManager.setNode` to match the above specification.
3. ... `pk nodes set` (with `--no-ping` and `--force`) and `pk nodes ping`