
Akka.Remote.EndpointDisassociatedException: Disassociated #6869

Open · beginner0925 opened this issue Aug 3, 2023 · 6 comments

@beginner0925

Version Information
All versions

Describe the bug
We use cluster sharding. After running for a period of time, the seed node or other nodes hit this exception, and the affected node then fails to connect to the seed node.

Environment
Windows

Additional context
Akka.Actor.OneForOneStrategy || Disassociated || Akka.Remote.EndpointDisassociatedException: Disassociated
at Akka.Remote.EndpointWriter.Unhandled(Object message)
at Akka.Actor.UntypedActor.Receive(Object message)
at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
at Akka.Actor.ActorCell.ReceiveMessage(Object message)
at Akka.Actor.ActorCell.ReceivedTerminated(Terminated t)
at Akka.Actor.ActorCell.Invoke(Envelope envelope)
||end

@Aaronontheweb (Member)

I'm sorry, but we're going to need more details here - can you give us a timeline of what happened exactly? This sounds like a config and ops issue on your end, not a bug with Akka.Remote.

@beginner0925 (Author)

akka {
  log-dead-letters = off
  log-dead-letters-during-shutdown = off
  loglevel = INFO
  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]
  extensions = [
    "Akka.Cluster.Tools.PublishSubscribe.DistributedPubSubExtensionProvider,Akka.Cluster.Tools",
    "Akka.Cluster.Tools.Client.ClusterClientReceptionistExtensionProvider, Akka.Cluster.Tools"]

  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    serializers {
      hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
    }
    serialization-bindings {
      "System.Object" = hyperion
    }
  }

  remote {
    #log-remote-lifecycle-events = on
    dot-netty.tcp {
      port = 0
      hostname = "$hostname$"
      send-buffer-size = 512000b
      receive-buffer-size = 512000b
      maximum-frame-size = 256000b
    }
  }

  cluster {
    name = "$cluster-name$"
    seed-nodes = [$seed-nodes$]  # address of seed node
    roles = ["api-node"]         # roles this member is in
    #auto-down-unreachable-after = 5s
    sharding {
      #fail-on-invalid-entity-state-transition = on
      #state-store-mode = ddata
      #remember-entities-store = ddata
      #remember-entities = on
      #least-shard-allocation-strategy.rebalance-threshold = 3
      #passivate-idle-entity-after = 60s
      #distributed-data.durable.keys = []
      #journal-plugin-id = "akka.persistence.journal.sharding"
      #snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }
    split-brain-resolver {
      active-strategy = keep-majority
    }
  }

  persistence {
    publish-plugin-commands = on
    journal {
      plugin = "akka.persistence.journal.redis"
      redis {
        class = "Akka.Persistence.Redis.Journal.RedisJournal, Akka.Persistence.Redis"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        configuration-string = "$persistence-redis-configuration$"
        database = 1
        key-prefix = "apiNode:"
      }
      sharding {
        class = "Akka.Persistence.Redis.Journal.RedisJournal, Akka.Persistence.Redis"
        configuration-string = "$persistence-redis-configuration$"
        database = 1
        key-prefix = "apiNode:"
      }
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.redis"
      redis {
        class = "Akka.Persistence.Redis.Snapshot.RedisSnapshotStore, Akka.Persistence.Redis"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        configuration-string = "$persistence-redis-configuration$"
        database = 1
        key-prefix = "apiNode:"
      }
      sharding {
        class = "Akka.Persistence.Redis.Snapshot.RedisSnapshotStore, Akka.Persistence.Redis"
        configuration-string = "$persistence-redis-configuration$"
        database = 1
        key-prefix = "apiNode:"
      }
    }
  }
}
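
For reference (not taken from this issue), here is a minimal sketch of how the $hostname$, $cluster-name$ and $seed-nodes$ placeholders are typically filled in. The system name, host addresses and ports below are assumptions; the system name in each seed-node address must match the ActorSystem name, and seed nodes normally bind a fixed port rather than port = 0:

akka {
  remote.dot-netty.tcp {
    hostname = "10.0.0.10"   # assumed address of this node
    port = 5001              # seed nodes usually use a fixed, well-known port
  }
  cluster {
    seed-nodes = [
      "akka.tcp://MyClusterSystem@10.0.0.10:5001",  # assumed seed-node addresses;
      "akka.tcp://MyClusterSystem@10.0.0.11:5001"]  # "MyClusterSystem" must match the ActorSystem name
  }
}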

@beginner0925 (Author)

var shardRegion1 = await clusterSharding.StartAsync(
    typeName: ActorTypeNames.RawDataHandlerNode,
    entityPropsFactory: e => Props.Create(() => new RawDataProcessActor(_actorManager, _serviceProvider, e)),
    settings: ClusterShardingSettings.Create(_actorSystem).WithRole(Roles.RawDataSharded),
    messageExtractor: new HashMessageExtractor());
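
HashMessageExtractor is a user-defined class that is not shown in this thread. For context, a minimal sketch of what such an extractor often looks like when built on Akka.NET's HashCodeMessageExtractor; the IRawDataMessage interface, the DeviceId property and the shard count of 100 are assumptions for illustration only:

using Akka.Cluster.Sharding;

// Hypothetical message contract: every message routed to the shard region carries an entity id.
public interface IRawDataMessage
{
    string DeviceId { get; }
}

// Sketch of a hash-based extractor; HashCodeMessageExtractor derives the shard id
// from a hash of the entity id, so only EntityId needs to be supplied.
public sealed class HashMessageExtractor : HashCodeMessageExtractor
{
    // 100 is an assumed shard count; it should stay fixed for the lifetime of the cluster.
    public HashMessageExtractor() : base(100) { }

    public override string EntityId(object message)
        => message is IRawDataMessage m ? m.DeviceId : null;
}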

@beginner0925 (Author)

That is all of the configuration. Sending data is very unstable: it is very easy to hit this exception, and then the node stops. I mainly use the cluster singleton.

@beginner0925 (Author)

case Terminated t:
{
    if (_reader == null || t.ActorRef.Equals(_reader))
    {
        PublishAndThrow(new EndpointDisassociatedException("Disassociated"), LogLevel.DebugLevel);
    }

    break;
}

Why is an exception thrown here? (Akka.Remote/Endpoint/EndpointWriter, line 1271)

@Aaronontheweb (Member)

That is all of the configuration. Sending data is very unstable: it is very easy to hit this exception, and then the node stops. I mainly use the cluster singleton.

Akka.Remote runs fine in thousands of live environments all over the world - this is an issue with your environment. Disassociations are normal - they occur when TCP connections are disrupted or nodes are shut down. Do you have any logs you can share?
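
If it helps with gathering those logs, one option (a sketch that only re-enables settings already present, or commented out, in the configuration above) is to raise the log level and turn remote lifecycle events back on while reproducing the problem:

akka {
  loglevel = DEBUG        # temporarily more verbose than INFO
  log-dead-letters = on   # re-enable to see where undeliverable messages were headed
  remote {
    log-remote-lifecycle-events = on   # the setting that is commented out in the config above
  }
}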
