
[Feedback Request] Alpha Two Server Meshing Technology Preview Shown in June Livestream

135 Comments

  • Jhoren Member
    Great stream! I love the tech talk and the performance improvements look awesome. Well, on paper at least :smile:
  • morphwastaken Member
    edited July 7
    I enjoy problem-solving, and this preview got me curious about some aspects, even though I know nothing about this topic.

    1. What is the big difference between actor data being sent to replicate for others on the same server, compared to data sent from a server to make a proxy on another? Is the only difference which server handles it? And if there is another distinction, could you apply the same changes to improve replication within a single server as well?
    2. How do you expect the server load to be distributed between handling native actors vs proxies? I imagine borders need to be fairly wide, and with some servers being surrounded by up to 8 others, I could see the proxy load being pretty high. It depends on the scale, I guess; I could even see a situation where there are more proxies than native actors, but that must be past a reasonable scale.
    3. If having too many players on a single server is the worst-case scenario, and Dynamic Gridding is there to help address that, would making the areas corresponding to each server thinner and longer make any sense?
    [Attached image: diagram of thin, full-height server regions instead of square ones]
    Such a structure could distribute server stress better to begin with, make Dynamic Gridding needed less often, and probably also make it less taxing when it still has to happen. It would increase proxy use at low server loads, but potentially reduce the adjacent server count from up to 8 for a square shape down to 5, or maybe even 2 for a rectangle, if it can be thin and long enough to cover the entire map height (before Dynamic Gridding). A rough sketch after this list illustrates the neighbor-count difference.
    4. Does Dynamic Gridding happen by unloading the highest-player-density map segment from the original server onto a new server, keeping what is left on the original server (or the other way around), or does everything have to get transferred onto new dynamic servers?
    5. If some structure can be separated into microservices, and proxies exist, could that be pushed further to the point where everything is service-based, handled by different service servers communicating relevant information between each other when required (one to handle all player coordinates, one to handle player stats, a couple to handle damage calculations, etc.)? And then have replication servers that would handle player input to be sent to the appropriate service servers, and relevant information to be delivered back to the player? If it is possible to proxy an actor from one server to another, would it be possible to proxy everything from service servers? Would that end up being too slow, or could it just not work the way I imagine it?
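    Rough back-of-the-envelope sketch for point 3 (Python, with grid sizes I made up, not anything from the stream), comparing the maximum number of adjacent servers for square regions vs. full-height strips:

        # Invented numbers: compare how many neighboring servers a region can touch
        # when the map is cut into equal squares vs. into full-height vertical strips.

        def square_region_neighbors(cols, rows):
            """Max adjacent regions (sides + corners) for an interior square region."""
            return min(8, cols * rows - 1)

        def strip_region_neighbors(strips):
            """Adjacent regions for a strip spanning the whole map height."""
            return min(2, strips - 1)

        print(square_region_neighbors(4, 4))  # -> 8
        print(strip_region_neighbors(16))     # -> 2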
  • Iskiab Member, Alpha Two
    Great stream. As a suggestion, it looks like AoC and Star Citizen are encountering similar problems; it might be a good idea to look into sharing information (if you think you can trust them).
  • leamese Member, Braver of Worlds, Kickstarter, Alpha One, Alpha Two, Early Alpha Two
    I loved everything about it!! Great tech for a next-gen game.

    I'm just worried about the FOV. Will my client render the objects on the other server? I like when there is a lot of stuff rendered instead of waiting for it to get rendered 10m away.
  • DarkestLink Member, Alpha Two
    I thought the stream was really interesting. I've seen other games try to do large-scale battles, but their server performance usually isn't able to handle it well. This is something that has concerned me, so seeing all the steps the dev team is taking to address the issue was really comforting. One thing I found surprising was that the devs showed the server grid using squares, because squares seem less efficient than hexagons. In a grid, each square is surrounded by 8 other squares (4 sides and 4 corners), but in a grid of hexagons, each hexagon is surrounded by only 6 hexagons (6 sides, 0 corners). I'm really curious to know if the devs considered using other shapes.
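    To illustrate the neighbor counts, a tiny sketch (my own illustration, not anything the devs showed):

        # Neighbor directions for a square grid (sides + corners) vs. a hex grid
        # (axial coordinates). Purely illustrative.

        SQUARE_DIRS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

        print(len(SQUARE_DIRS))  # 8 adjacent squares: 4 sides + 4 corners
        print(len(HEX_DIRS))     # 6 adjacent hexagons: 6 sides, no corner-only contacts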
  • palabana Member, Alpha One, Alpha Two, Early Alpha Two
    I keep hearing the term "Relevancy". What exactly is "Relevancy"? How do you determine the "Relevancy" of an Actor (entity) to the player?

    Secondly, in Star Citizen, it was said that the game servers will shut down for parts of the map where there is no user activity to reduce costs. Is Intrepid going with the same approach?

    Overall, the server meshing technology sounds good. Alas, we can only trust you that it is working as you said it would.
  • Ripteye Member, Alpha Two
    MMOs should be on one shard like EVE Online, so I am happy about the Server Meshing Technology. Hopefully it works 😁
  • YueOrigin Member
    I would love to give some feedback from France, lol,
    though I dunno where the servers are located.

    One day i might be able to. One day
  • JohnWynter Member, Alpha Two
    I'm curious about the One Realm aspect of Ashes. Will there be only one Realm per region, like NA, EU, etc., or will there be further divisions such as NA West and NA East? Alternatively, really crazy out there, will there be literally just one realm that everyone everywhere plays on?

    Also, thanks for the fantastic stream and hard work! Really excited to get into the game when the time comes.
  • Seems good; this is the kind of thing I've been expecting for many years in many games.
    PvE means: A handful of coins and a bag of boredom.
  • NailnScrew Member, Alpha Two
    Really excited to hear about the new server tech! Presentations are improving alongside everything else. Glad the team is so passionate about making it all come together.
    Aaaaaaughibbrgubugbugrguburgle
  • Dr3amerA Member, Alpha One, Alpha Two, Early Alpha Two
    I said it before and I will say it again.
    I really like these tech updates, and they make me want to dive deeper into it.

    From what was said in the video this looks really promising. I can't wait to see it in action.
    I am relieved to see the way you think ahead, and in detail, about such issues, not stopping at the comfort of what current technology can accomplish, but instead going to the root of the problem and changing things to make it scalable and viable.

    The presentation was good and easily digestible, and visually appealing enough to keep people engaged with both gameplay and charts.

    I'm excited to see 250v250 and 500v500 fights, both planned and spontaneous, at different locations simultaneously.
    The dynamic splitting of a server into smaller servers, how it happens in real time, and what failsafes are in place to come back from a worst-case scenario, both from a player's perspective and from Intrepid's, are all things that concern me.

    Keep up the good work!
  • Sengarden Member, Alpha Two
    edited July 5
    How do you feel about the Server Meshing Technology Preview?

    I thought it was awesome! As a lower-level art grad now pursuing a higher-level computer science degree with hopes of pursuing game dev, particularly as someone who's generally more passionate about narrative, gameplay concepts, and visuals, it's great to hear your tech devs speaking so passionately about the work they do. It really is some insanely impressive progress they've made!

    What did you think about the presentation during this development update, and hearing more about what goes on behind the scenes of server networking?

    Well laid-out video too, the team made everything easy to understand and appreciate. I like understanding a little bit about what goes on "under the hood".

    Is there anything in particular you’re excited or concerned about regarding what was shown with the Server Meshing Technology Preview?

    Other than having a slight bit of that skeptical "I'll believe it when I see it" feeling, which really just comes from seeing devs over-promise and under-deliver all the time, I can't say I have any concerns. It was all very exciting to hear about.

    Are there similar systems you’ve seen in other games that you like or dislike? If so, please explain!

    Can't really say so. What you're all trying to do is incredibly difficult and treads a lot of new territory, I'm sure, so kudos!

    One little thing:

    I just wanted to say the art update this month was really great. In last month's feedback thread, I talked a fair amount about the seemingly incongruous nature of the art team members' visions for the game, particularly in regard to color palette. I know I'm not the only one feeling like there have been way too many super bright jewel tones making their way into pieces of gear and other places where maybe they shouldn't be, and a lot of gaudy color combinations that distract from the natural beauty of the world. The game world and the people/clothing/monsters that fill it should feel like they all belong together and create a sense of visual harmony.

    The gear that was shown off this month was just awesome. It all looked so good, particularly the renaissance-looking upper body piece and the feathered cloak. The use of color on the upper body piece was perfect in my opinion: it added a splash of character and personality without being distractingly bright, new-looking, or unnaturally derived. If I had to pitch a vote in favor of seeing more of a particular style in the dev years to come, it would be more of that.

    Thanks everyone! Great work!
  • Jhoren Member
    JohnWynter wrote: »
    I'm curious about the One Realm aspect of Ashes. Will there be only one Realm per region, like NA, EU, etc., or will there be further divisions such as NA West and NA East? Alternatively, really crazy out there, will there be literally just one realm that everyone everywhere plays on?

    Also, thanks for the fantastic stream and hard work! Really excited to get into the game when the time comes.

    There are probably going to be multiple realms in each timezone, at the very least at release, if the 1 million+ registered players all try it. To start with, they are going to limit the max registered accounts per server to around 15k, but slowly increase that up to 50k per server down the line.
  • Fisherman Sushi Member, Alpha One, Alpha Two, Early Alpha Two
    I am extremely excited about what is to come from this technology. I have a few questions based on all the new info, mostly tech-related.

    1. Will specific high-density locations have "beefier" hardware associated with that specific server? Or is Intrepid considering using servers hosted in the Cloud for these locations? My only logic behind this would be the ability to scale up a specific server quickly with the Cloud vs Local On-Prem Servers. I know Steven had mentioned Cloud Providers but it did seem like it was mostly an on-prem solution and the Cloud would be leveraged as needed.


    2. Is there any load balancing going on between the servers? With a 4x4 grid for example, locations with dungeons, large cities, or other highly sought-after content will definitely have a higher density of players and a much larger amount of actions being sent to the server where these activities and areas are located. Will nearby servers that are maybe less populated be assisting with the larger load of another server? I feel like this could be extremely helpful when it comes to the Dynamic Gridding that was mentioned in the live stream.


    3. Lastly it was mentioned a few times that the server a player is located on will have the primary character and a nearby server will have a proxy so that when players cross from one server to the next the servers can negotiate who holds the primary. Will all nearby servers have to store a proxy of each player at all times or is this strictly dependent on some variable like the distance from the next server? I figure it would be the latter as that would be a ton of redundant data if all neighboring servers had to store a proxy.


    4. No matter what you do with the Alpha 2 keys, people are going to be unhappy. I personally lean towards making them more available so the world is full, the team gets as much data as possible, and more people have access to the game to give their opinions. All of that together will hopefully lead to a better end product, and I don't feel like that's a bad thing!


    Appreciate all the hard work!


  • Laetitian Member
    How do you expect the server load to be distributed between handling native actors vs proxies? I imagine borders need to be fairly wide, and with some servers being surrounded by up to 8 others, I could see the proxy load being pretty high. It depends on the scale, I guess; I could even see a situation where there are more proxies than native actors, but that must be past a reasonable scale.
    @morphwastaken Keep in mind proxy replication requires substantially less server processing power, because most of the calculations involving the proxy are done by the authoritative server; the replicating server should mostly just copy the numbers, unless there is interaction across the borders, and even that only requires calculating one side of the impact of the interaction/projectile. (There is a rough sketch of what I mean at the end of this post.)
    I also don't think that a server will ever be surrounded by 8 smaller-grid servers without being split up itself. It seems vanishingly unlikely that there would be that much interaction across a section's borders without there also being a lot of players inside that section.
    5. If some structure can be separated into microservices, and proxies exist - could that be pushed further to the point where everything is service based, handled by different service servers communicating relevant information, when required, between each other (one to handle all player coordinates, one to handle player stats, couple to handle damage calculations, etc.)?
    Pretty sure you only want the client to be connected to as few servers as possible. Switching connections rapidly and continuously is a huge issue for connectivity and network load. It may be possible with things like environmental information, but even that would require a lot of back-and-forth, and as they've pointed out, they've already optimised the data requirements for map asset information to the bare minimum by hibernating inactive assets.

    Overall good inspiration, but I think one more thing to keep in mind is that the bigger question is always going to be reducing the initial load per player/actor object in the first place. Just throwing more servers at it would quickly become a cost issue.
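    A rough sketch of what I mean by "mostly just copy the numbers" (names invented by me, not Intrepid's actual code):

        # The owning server runs the full, expensive simulation step for its native
        # actors; a neighboring server just overwrites its proxy copy with the
        # replicated numbers, plus occasional cross-border checks.

        from dataclasses import dataclass

        @dataclass
        class ActorState:
            x: float
            y: float
            hp: int

        def simulate_native_actor(state: ActorState, dt: float) -> ActorState:
            # Stand-in for pathfinding, abilities, collision, aggro, ... (expensive).
            return ActorState(state.x + 1.0 * dt, state.y, state.hp)

        def update_proxy(replicated: ActorState) -> ActorState:
            # Cheap: just adopt the authoritative numbers as-is.
            return replicated

        owner_view = simulate_native_actor(ActorState(0.0, 0.0, 100), dt=0.05)
        proxy_view = update_proxy(owner_view)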
    The only one who can validate you for all the posts you didn't write is you.
  • pfanne Member
    I also enjoy problem solving, so here is my take on what you have shown in the video.
    Fundamentally, the problem of how many players can be in one place is twofold: simulation and replication.
    Intrepid showed in their video that they are struggling with getting replication to scale well. In the latter half I will propose a solution to alleviate the problem.

    Simulation:
    - Simulation at worst scales quadratically with the number of entities in a certain space.
    - The formula would be roughly: sim_tick_rate * entity_count^2.
    - However, usually the distance between two entities needs to be really small for them to actually interact, so it is more of a collection of simulation clusters.
    - So, actually, the formula would be something like: sim_tick_rate * num_clusters * entity_count_per_cluster^2 (a rough sketch at the end of this list illustrates the difference).
    - The computation load for this can be nicely handled by splitting the clusters along spatial lines and spreading clusters across interlinked servers (basically your server meshing system with actor/proxy).
    - Additionally, non-interacting systems can be offloaded onto microservices, etc. (guild system, combat physics, trade system).
    - Here, I am stating nothing new that was not shown in the video.
    - With how you presented the video, it seems like your bottleneck is currently not with simulation but with replication.
    - Probably because game engine developers have put way more work into improving simulation speeds for massive simulations than into massive multiplayer (parallelized hierarchical collision detection, parallelized entity component systems, etc.).
    - Here, I don't think I am stating anything new that wasn't shown in the video.
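    - A quick sketch of the cluster formula above with invented numbers (the tick rate and entity counts are my assumptions; Python just as a calculator):

        SIM_TICK_RATE = 30        # ticks/second (invented)
        TOTAL_ENTITIES = 2000
        NUM_CLUSTERS = 50
        ENTITIES_PER_CLUSTER = TOTAL_ENTITIES // NUM_CLUSTERS  # 40

        naive_cost = SIM_TICK_RATE * TOTAL_ENTITIES ** 2                           # 120,000,000
        clustered_cost = SIM_TICK_RATE * NUM_CLUSTERS * ENTITIES_PER_CLUSTER ** 2  # 2,400,000
        print(naive_cost, clustered_cost)  # clustering divides the pairwise work by num_clusters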

    Replication:
    - Replication, similar to simulation, also scales quadratically, though a bit differently for your architecture.
    - The formula would be something like: net_tick_rate * (npcs + clients) * (neighbor_servers + clients).
    - Additionally, there is a difference here between computation load and network load.
    - And just like with simulation, the replication in games only needs to happen over a limited spatial area.
    - But since visual information travels faster and farther than simulated bodies, the size of these clusters is much bigger; therefore, splitting them onto different servers is harder/impossible to do in smaller areas.
    - You tried to address this problem in multiple ways:
    * Since simulation and replication happen on the same server, server meshing will help with distributing computation and network replication load as well.
    * You optimized which entities need to be replicated (line of sight tests, range tests, dormancy, etc.).
    * You optimized how often entities actually need to be replicated (relevancy based on threat, range, dormancy, etc.).
    * Optimized custom implementation with multithreading.

    Here is where I also propose or rather reintroduce proxy-only-servers/replication servers:
    - Instead of replicating all relevant entities to all connected players, the meshing system could spawn new servers that only contain proxies.
    * The proxy servers would overlap the spatial area covered by regular servers but perform no simulation.
    - Network topology under heavy load would change from:
                   stressed
        server a   server b   server c
           |           |         |
           +-----------+---------+----...
           |           |         |
         +-+-+       +-+-+     +-+-+
         | | |       | | |     | | |
        clients     clients   clients
    
    - To this:
                                 no longer stressed
        server_a                     server_b                    server_c
           |                             |                          |
           +-----------------------------+--------------------------+----...
           |                             |                          |
           |             +-self_pc_repli-+                          |
           |             |               |                          |
         +-+-+           |        +------+-------+                +-+-+
         | | |           |        |      |       |                | | |
        clients          |  proxy_b1  proxy_b2  proxy_b3         clients
                         |        |      |       |
                         |      +-+-+  +-+-+   +-+-+
                         |      | | |  | | |   | | |
                      even moooooooooooooooooore clients
    
    - Self_pc_repli (self-PC replication) means that server b would still replicate each player's own character information directly to that player, to reduce latency for your own character.
    - This would change the load on server b to this:
    * net_tick_rate * (clients + (npcs + clients) * (neighbor_servers + proxy_servers))
    - The proxy servers each would have a load of:
    * net_tick_rate * (npcs + clients - 1) * proxy_clients

    Example calculation:
    Number of players: 1000
    Number of NPCs: 1000
    Network ticks: 10/second
    Neighboring connected servers: 4
    Proxy servers under load: 3
    Maximum server capacity: 10,000,000 U/s (units per second) = 10,000 kU/s (kilo unit per second) = 10 MU/s (mega unit per second)

    Let's just call the unit of computation needed to know if a player needs information about an NPC or other player and the serialization of that data CU.
    And the unit of network bandwidth for sending information about a player or NPC is called NU.
    In a worst-case scenario, each CU would create an NU, because everybody can see everybody.
    Let's combine these units into a generic unit called U.

    With no proxy servers, the estimated server load would be:
    - net_tick_rate * (npcs + clients) * (neighbor_servers + clients)
    - 10/s * (1000 + 1000) * (4 + 1000) = 20,080,000 U/s = 20,080 kU/s ≈ 20 MU/s
    => This would be twice as high as the actual server computation capacity.

    With proxy servers, the load on the regular server would be:
    - net_tick_rate * (clients + (npcs + clients) * (neighbor_servers + proxy_servers))
    - 10/s * (1000 + (1000 + 1000) * (4 + 3)) = 150,000 U/s = 150 kU/s = 0.15 MU/s
    - The CU would actually always be lower than the NU in this case, because the data only needs to be serialized once for all proxies, since they all receive the exact same data.
    And the load per proxy server:
    - net_tick_rate * (npcs + clients - 1) * proxy_clients
    - 10/s * (1000 + 1000 - 1) * 333 = 6,656,670 U/s = 6,657 kU/s = 6.7 MU/s
    => The addition of 3 proxy servers would allow 1000 players to be playing together in a single spot.
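    The same example in script form (same invented numbers as above), in case anyone wants to play with them:

        NET_TICK_RATE = 10        # ticks/second
        CLIENTS = 1000
        NPCS = 1000
        NEIGHBOR_SERVERS = 4
        PROXY_SERVERS = 3
        CAPACITY = 10_000_000     # U/s per server

        # No proxy servers: net_tick_rate * (npcs + clients) * (neighbor_servers + clients)
        load_no_proxy = NET_TICK_RATE * (NPCS + CLIENTS) * (NEIGHBOR_SERVERS + CLIENTS)
        print(load_no_proxy, load_no_proxy > CAPACITY)    # 20080000 True -> roughly twice capacity

        # With proxies: net_tick_rate * (clients + (npcs + clients) * (neighbor_servers + proxy_servers))
        load_main = NET_TICK_RATE * (CLIENTS + (NPCS + CLIENTS) * (NEIGHBOR_SERVERS + PROXY_SERVERS))
        print(load_main)                                  # 150000

        # Per proxy: net_tick_rate * (npcs + clients - 1) * proxy_clients
        proxy_clients = CLIENTS // PROXY_SERVERS          # 333 clients per proxy
        load_per_proxy = NET_TICK_RATE * (NPCS + CLIENTS - 1) * proxy_clients
        print(load_per_proxy, load_per_proxy < CAPACITY)  # 6656670 True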

    Benefits:
    - This topology allows splitting the quadratic load of replication evenly across multiple proxy servers with almost linear scaling.
    - At the cost of additional latency, the proxy system could be cascaded into more layers when NU load to replicate to X number of proxies exceeds the NU capacity of the regular server.
    - With the main server's load reduced to only 0.15 MU/s, it could still take on the replication load of one proxy server itself, reducing the actual number of proxies needed by 1.

    Some caveats:
    - I pulled these numbers out of my ass.
    - With actual numbers, this topology change may not even be worth it.
    - There is added latency for other players because the data needs to be replicated an additional time for each layer of proxies instead of only once to arrive at the player.
    - Overall load still grows quadratically and will probably still consume the computing capacity of the whole planet with a million players in the same spot.

    Thanks for listening to my TED talk.
  • Laetitian Member
    edited July 6
    pfanne wrote: »
    Here is where I also propose or rather reintroduce proxy-only-servers/replication servers:
    Pretty sure that doesn't work, for the same reason I stated above: you're forcing the players to be connected to multiple different servers that are simultaneously feeding them high-frequency data.
    I think that's just not how fast-paced networking with dozens of ticks per second is done. You're asking the network to do way too much switching for each client per second, especially on the server network side, and perhaps also on the clients' modems' side. (Though I imagine if the game had like 10 players there might be a way to do what you're proposing...?)

    Can't say for sure, and your ideas in general are nice to look at, but I think there's just a hard limit there. Maybe one day computing will change to the point where that sort of multi-connection is realistic, but I don't think that's a realistic approach for massive networks like MMOs yet. For now I think it's essential that there's a stable connection for the majority of the high-frequency data transfer.
    I mostly mention it in case it makes you think about an alternative way of realising your idea.
    The only one who can validate you for all the posts you didn't write is you.
  • Beardy Member, Alpha Two
    leamese wrote: »
    I loved everything about it!! Great tech for a next-gen game.

    I'm just worried about the FOV. Will my client render the objects on the other server? I like when there is a lot of stuff rendered instead of waiting for it to get rendered 10m away.

    From what they said on the stream, the intent is that server meshing should not affect how or where objects render in; that would be a different setting. I'm sure there are, or will be, rendering bugs related to that which need to be worked out.
  • Beardy Member, Alpha Two
    I absolutely loved the tech heavy livestream. The team did an amazing job showing off a quite complex topic in a way that was easy to understand and enjoy. The quality of the presentation was amazing.

    I would love to see more tech-heavy demonstrations. Even if a full livestream with the whole team doesn't work, a shorter video with the devs, or even just a forum post with some quality slides, would be great. Video games in general are huge, and Ashes is massive. Showing off the complexity of the systems people don't usually get to see would be great.
  • Fiddlez Member
    I am definitely not knowledgeable enough to give feedback, but it was definitely interesting.

    Will it allow for more players per server? Can we expect another siege stream soon, or post-A2-launch, to show this technology off?
  • Rawblin Member, Alpha One, Alpha Two, Early Alpha Two
    Happy with everything that was shared in the stream!

    I would simply ask if server redundancy/load sharing/backups have been considered? I'm not sure what the technical term would be. (I'll edit this term if someone tells me.)

    Premise: You all stated that when one server goes down, people connected to that server are dropped, but the rest of the realm goes on without issue. Then said server starts back up and people can come back and resume gameplay.

    Example: What if said server goes down, but the four (eight?) adjacent to it, which already have its information as discussed in the stream, instantly take that load to keep all the players in the downed server in the game and playing. Then the original server that went down is spun back up, and people in that area are again transferred to said original server.

    I realize this is probably a whole heap of extra work on top of what is already going on behind the scenes, especially considering the goal is to have it be seamless for the playerbase. But I think it is worth a look, because if it's possible it would make the gameplay experience even more amazing, as only cascading server errors would ever cause anyone to disconnect.
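    A very rough sketch of the kind of handoff I mean (names invented by me; it ignores all the genuinely hard parts like authority transfer and state reconciliation):

        def reassign_on_failure(downed_server, grid_neighbors, proxy_holders):
            """Hand each area of the downed server to a neighbor that already proxies it."""
            assignments = {}
            for area, holders in proxy_holders[downed_server].items():
                # Pick any healthy neighbor that already holds a proxy copy of this area.
                candidates = [s for s in grid_neighbors[downed_server] if s in holders]
                if candidates:
                    assignments[area] = candidates[0]
            return assignments

        grid_neighbors = {"B2": ["A2", "B1", "B3", "C2"]}
        proxy_holders = {"B2": {"north_edge": {"B1"}, "south_edge": {"B3"}}}
        print(reassign_on_failure("B2", grid_neighbors, proxy_holders))
        # -> {'north_edge': 'B1', 'south_edge': 'B3'}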

    Thanks!
  • Keyb1nd_ Member, Alpha One, Alpha Two, Early Alpha Two
    edited July 7
    Can't wait to test this out :)
  • norjj Member, Alpha Two
    edited July 6
    On the limitations of dynamic gridding/meshing, or why Asmongold isn't completely wrong (yet):

    With the recent confirmation from Steven that AoC is still targeting 10k concurrent players per shard, I thought it would be interesting to discuss the bottlenecks preventing higher player populations (20k, 30k, or "One Realm").

    The first and most obvious is latency. This is a physics problem, and I don't expect the AoC team to solve FTL travel. Shards must be split by geographical distance to maintain client-server performance. Think NA East, NA West, EU, etc. Let's consider subs in NA East: if the sub count ever exceeds 10k concurrent users on NA East (think large queue times), an additional NA East shard would be required. This fractures a playerbase that could theoretically exist on the same shard. So let's frame the discussion with a couple of assumptions:

    1. There will be regional based sharding and "One Realm" is not feasible
    2. AoC will be popular enough to exceed the 10k slots in a single region

    With that out of the way, I'd like to talk about the feasibility of increasing player counts for identical regions and what prevents 50,000 players from playing on a single NA East shard.

    The first possible limitation is the architecture of dynamic gridding itself. We don't know if the system is even designed for nested fractures. The demo shows a portion of the map, with the server handling that region split into 3 sections, requiring 2 additional virtual machines to spin up and handle the partitions. This could be a self-imposed architectural limitation, but I assume at least one additional partition can take place, splitting the region into 4 equal quarters, which requires 3 virtual machines to handle the load. We don't know if those newly created virtual machines can further sub-divide into, say, 8 virtual machines handling what was previously handled by a single VM.
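    As a purely speculative illustration of the nesting question (not confirmed Intrepid architecture; the split factor and depth are my assumptions): if a region can keep fracturing, each round of splits multiplies the number of leaf regions, and every leaf beyond the first needs its own additional VM.

        def extra_vms(parts_per_split: int, depth: int) -> int:
            """Additional VMs if a region splits `depth` times, every sub-region splitting each round."""
            leaf_regions = parts_per_split ** depth
            return leaf_regions - 1  # the original VM keeps one leaf; each other leaf needs a new VM

        print(extra_vms(3, 1))  # 2  -> the 3-section split shown in the demo
        print(extra_vms(4, 1))  # 3  -> the quarter split discussed above
        print(extra_vms(4, 2))  # 15 -> if each of those quarters could split again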

    The second limitation is gameplay. We don't know if the gameplay systems/loops/points of interest/map have been designed to scale beyond the initial goal of 10,000 concurrent players. Think towns, banks, spawners, etc. becoming too populated to be usable. This cap might actually be harder to overcome than the previously discussed possible limitation on dynamic meshing. This limit is purely a function of developer hours. We know the world is huge, but are the systems designed to scale beyond 10k players?

    The third limitation is harder to quantify. If we assume 500 vs 500 is the technical goal of dynamic meshing, what is the probability of 10,000 players splitting off into a 500 vs 500 battle? Does doubling or tripling the number of concurrent players increase the odds that battles would exceed the self-imposed 500 vs 500 architecture? Food for thought.
  • Zer0Kelvin Member
    [SERVER MESHING & REPLICATION LAYER BOTTLENECKS COMPARED WITH OTHER GAMES]

    There have been other games, in the past and in development, that have had some sort of server-meshing-type tech, e.g. Dual Universe, Atlas, and notably Star Citizen (still in development).

    I would like to know, from an Intrepid developer's standpoint, how does the tested Replication Layer (which was found to be a needless bottleneck) compare to Star Citizen's upcoming implementation of a Replication Layer?

    Of course, the scale, engine, and focus are different between the two games. However, it seems there's also enough similarity to consider the pros/cons of a Replication Layer for each game.

    I think it would be interesting to compare the two systems, in particular the Replication Layer and services so as to understand challenges in both systems.

    Can any Intrepid Developer / Network Guru pipe in to help elaborate or even offer some theory and speculation(with disclaimer) on this topic further?
  • KrystalKitten Member, Leader of Men, Kickstarter, Alpha One, Alpha Two, Early Alpha Two
    Ok so the stream went COMPLETELY over my head and I had no idea what anyone was saying.
    However
    I found it extremely fascinating, and it's really what I have been wanting from an MMORPG ever since I first started playing them. One server where we can see everyone in the world, not having to server hop to find friends and do things; it's all just right there, and I am in love with it!
  • Sivien Member, Alpha One, Alpha Two, Early Alpha Two
    I thought the stream went very well! Looking forward to seeing this tech in real time!
  • patrick68794 Member, Alpha Two
    edited July 7
    Zer0Kelvin wrote: »
    [SERVER MESHING & REPLICATION LAYER BOTTLENECKS COMPARED WITH OTHER GAMES]

    There have been other games, in the past and in development, that have had some sort of server-meshing-type tech, e.g. Dual Universe, Atlas, and notably Star Citizen (still in development).

    I would like to know, from an Intrepid developer's standpoint, how does the tested Replication Layer (which was found to be a needless bottleneck) compare to Star Citizen's upcoming implementation of a Replication Layer?

    Of course, the scale, engine, and focus are different between the two games. However, it seems there's also enough similarity to consider the pros/cons of a Replication Layer for each game.

    I think it would be interesting to compare the two systems, in particular the Replication Layer and services so as to understand challenges in both systems.

    Can any Intrepid Developer / Network Guru pipe in to help elaborate or even offer some theory and speculation(with disclaimer) on this topic further?

    Star Citizen's replication layer is a suite of scalable microservices that handles inter-server replication. Basically, an instance of the microservice suite handles replication between a specific number of servers and can interact with other instances of the replication layer when it needs to replicate data to servers outside that instance's managed servers (like when traveling from one star system to another, for example).

    They will also have the ability for large ships (and theoretically even small, single-seat ships, though that will likely never actually happen) to be their own servers, so they need to be able to handle authority for a server that will be dynamically changing which other servers it shares boundaries with. Passing that between replication layer instances, instead of directly updating which servers it shares boundaries with, is likely more efficient. This isn't something Ashes needs, so in their case a dedicated replication layer is probably just unnecessary added complexity.
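    To make that concrete, a toy model of the idea as I understand it (names and structure invented by me, not CIG's or Intrepid's actual code): each replication-layer instance manages a set of game servers and hands an update off to the peer instance that manages the target server.

        class ReplicationLayerInstance:
            """Toy stand-in for one instance of a replication-layer microservice suite."""

            def __init__(self, name, managed_servers):
                self.name = name
                self.managed_servers = set(managed_servers)
                self.peers = []  # other ReplicationLayerInstance objects

            def replicate(self, entity_state, target_server):
                if target_server in self.managed_servers:
                    print(f"{self.name}: push {entity_state} to {target_server}")
                    return
                # Not ours: hand the update to whichever peer manages the target.
                for peer in self.peers:
                    if target_server in peer.managed_servers:
                        peer.replicate(entity_state, target_server)
                        return
                raise LookupError(f"no instance manages {target_server}")

        instance_a = ReplicationLayerInstance("repl-A", {"server_1", "server_2"})
        instance_b = ReplicationLayerInstance("repl-B", {"server_3"})
        instance_a.peers.append(instance_b)
        instance_a.replicate({"ship": "proxy state"}, "server_3")  # forwarded to repl-B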
  • Zer0Kelvin Member
    edited July 8
    patrick68794 wrote: »

    Star Citizen's replication layer is a suite of scalable microservices that handles inter-server replication. Basically, an instance of the microservice suite handles replication between a specific number of servers and can interact with other instances of the replication layer when it needs to replicate data to servers outside that instance's managed servers (like when traveling from one star system to another, for example).

    They will also have the ability for large ships (and theoretically even small, single-seat ships, though that will likely never actually happen) to be their own servers, so they need to be able to handle authority for a server that will be dynamically changing which other servers it shares boundaries with. Passing that between replication layer instances, instead of directly updating which servers it shares boundaries with, is likely more efficient. This isn't something Ashes needs, so in their case a dedicated replication layer is probably just unnecessary added complexity.

    Right, but it will be interesting to see how SC avoids any bottleneck (if any) with their replication system.
    Static server meshing will come in first, but eventually the goal is Dynamic server meshing.
    I'm a bit concerned about what type of redundancy will be built in for the Replication Layer in SC, in case replication itself goes down.
    But I'll stop here. This isn't an SC forum. Just hoping to learn the pros/cons of both Ashes and SC's implementation of this tech.
  • adlez Member
    Small feedback from watching a previous update video:
    - Emitted light from fires should flicker.
    - Spells should emit light; I only saw one instance, where turning the wand to ice had light emit from it for a split second.
    Personal request/idea: please add filters in the game's visual options!