Greetings, glorious testers!
Check out Alpha Two Announcements here to see the latest news on Alpha Two.
Check out general Announcements here to see the latest news on Ashes of Creation & Intrepid Studios.
To get the quickest updates regarding Alpha Two, connect your Discord and Intrepid accounts here.
Comments
As for redundancy, they have a scalable graph database that handles all of the persistence for the game and keeps a real-time image of the game world and their replication layer services, plus a separate "long term" database that is written to regularly (and, I'm assuming, backed up regularly as well). The chances of everything crashing at once are slim to none. If the replication layer crashes, the graph database will have an almost-current state to restore to once new replication services are started, and if both of those somehow crash, there will still be a recent world state to restore from the long-term database after the graph DB and replication layer services are restarted.
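To make that recovery order concrete, here's a rough sketch of the fallback chain being described - purely my own illustration with made-up names, not Intrepid's actual code:

```python
# Hypothetical sketch of a tiered restore: use the freshest persistence tier
# that is still reachable, falling back to progressively older snapshots.
class SnapshotSource:
    def __init__(self, label, state, available=True):
        self.label = label
        self.state = state
        self.available = available

    def latest_snapshot(self):
        if not self.available:
            raise ConnectionError(f"{self.label} unreachable")
        return self.state

def load_world_state(tiers):
    """Return the most recent world state from the first reachable tier."""
    for tier in tiers:
        try:
            state = tier.latest_snapshot()
        except ConnectionError:
            continue  # this tier is down too, fall back to the next one
        print(f"restoring from {tier.label}")
        return state
    raise RuntimeError("no persistence tier reachable")

# Example: the replication layer has crashed, so recovery falls back to the
# graph database's almost-current image of the world.
tiers = [
    SnapshotSource("replication layer", {"tick": 1_000_042}, available=False),
    SnapshotSource("graph database", {"tick": 1_000_040}),
    SnapshotSource("long-term database backup", {"tick": 998_000}),
]
print(load_world_state(tiers))
```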
The main con of SC's approach is the overall complexity that comes from needing not only to scale both out and up, but also to dynamically "move" the region a server is responsible for across server boundaries. The main pro is that, with how decoupled all of their services are designed to be, it should be quick and easy to recover from most failures with little risk of losing any data.
AoC should technically have marginally less latency since the servers communicate directly. The main con I can see for AoC's approach, outside of potential performance issues caused by Unreal not being designed for this, is that the servers handle replication directly, so if something goes wrong with replication (data corruption, dropped packets, etc.), the servers involved in that specific replication task could deadlock or crash if those cases aren't handled properly or if some unusual edge case was missed (and yes, there will be missed edge cases in something this complex). And even though they rewrote a lot of Unreal's replication graph code and made it multithreaded, there will be performance issues after going live that they couldn't foresee with internal tests, even stress testing with bots.
Elder Scrolls Online uses a similar approach to netcode, especially in open-world PvP. One major issue they encountered is damage-over-time skills: when thousands of players are applying hundreds of DoTs across a choke point, you start getting astronomical numbers of damage calculations.
They've tried to band-aid this by changing most DoTs to calculate and "hit" every 2 seconds instead of every second.
What I'd suggest exploring to mitigate this issue from the start is giving every DoT effect applied to a player (even from the same skill) a damage tick rate that isn't uniform.
One way you could do this is with a varied assignment style:
DoT 1 applied every second at 1x damage
DoT 2 applied every 0.3 seconds at 0.3x damage
DoT 3 applied every 0.7 seconds at 0.7x damage
This would have minimal impact on total damage, and you'd want someone mathematically inclined to find rates that rarely or never overlap.
This DoT assignment style would break up damage calculations across different ticks and alleviate the load of hundreds of players each applying hundreds of DoT instances.
A single DoT calculation is basically free processing-wise, but 100 players applying 2 or 3 AoE DoTs to hundreds of players becomes insane (100 players x 2 DoTs x 100 targets in a chokepoint = 20,000 damage calculations per tick)
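A quick toy simulation of the staggering idea, using the three tick rates listed above and assuming a 100 ms server frame - this is my own back-of-the-envelope sketch, not code from ESO or Intrepid:

```python
from collections import defaultdict

# Per-tick damage scales with the interval (1x/1.0s, 0.3x/0.3s, 0.7x/0.7s),
# so damage per second is unchanged; only the timing of the ticks differs.
STAGGERED_INTERVALS = [1.0, 0.3, 0.7]
UNIFORM_INTERVALS = [1.0]

def calcs_per_frame(num_instances, intervals, duration=10.0, frame=0.1):
    """Count how many damage calculations land in each server frame."""
    buckets = defaultdict(int)
    for i in range(num_instances):
        interval = intervals[i % len(intervals)]
        t = interval
        while t <= duration:
            buckets[round(t / frame)] += 1
            t += interval
    return buckets

# 100 attackers x 2 AoE DoTs x 100 targets = 20,000 active DoT instances.
uniform = calcs_per_frame(20_000, UNIFORM_INTERVALS)
staggered = calcs_per_frame(20_000, STAGGERED_INTERVALS)
print("uniform ticks, worst frame:  ", max(uniform.values()))    # all 20,000 land at once
print("staggered ticks, worst frame:", max(staggered.values()))  # spread across frames
```

With the staggered intervals the worst single frame carries roughly a third fewer calculations, and better-chosen (less overlapping) rates would spread the load further.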
nice
Why would you intentionally create points where 4 servers have to communicate proxy information to one another when you can just as easily tile the game grid with servers that never come to more than 3 at a point? Is there a technical reason for this? What about servers of different sizes for that matter?
This almost looks like it's considered during the "Dynamic Gridding" portion of the video, but I don't see any reason why you couldn't do Dynamic Gridding with the "brick-laying" geometry shown above.
If you really wanted to, you could create long thin servers at different game "latitudes" that span the entire globe, reducing the number of adjacent servers at any point to 2. (But that involves different server areas which may not be supportable given the game's size)
During the Spatial Grid bucket, the number of servers is directly referenced as the reason for a performance boost, using 3 adjacent squares looking towards a corner. Again, if you slide half of the servers over (as above), you would only ever be dealing with 2 adjacent squares.
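For anyone curious about the geometry being suggested here, a tiny toy check (my own illustration with made-up cell sizes, nothing to do with Intrepid's actual grid code) comparing how many cells meet at a grid corner in a square layout versus a half-offset "brick" layout:

```python
def square_cell(x, y, size=100.0):
    """Cell id in a regular square grid."""
    return (int(x // size), int(y // size))

def brick_cell(x, y, size=100.0):
    """Cell id in a 'brick-laid' grid where odd rows are offset by half a cell."""
    row = int(y // size)
    x_offset = size / 2 if row % 2 else 0.0
    return (int((x + x_offset) // size), row)

def cells_touching(point, cell_fn, eps=1e-6):
    """Distinct cells found just around a point (a crude adjacency probe)."""
    x, y = point
    return {cell_fn(x + dx, y + dy)
            for dx in (-eps, 0.0, eps)
            for dy in (-eps, 0.0, eps)}

corner = (100.0, 100.0)  # a corner point in the regular grid
print(len(cells_touching(corner, square_cell)))  # 4 servers meet at this point
print(len(cells_touching(corner, brick_cell)))   # at most 3 with the offset layout
```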
The preview was well planned out and the presentation made it clear what to expect when all the pieces come together.
Neither excited nor concerned at this time - will reserve detailed comments until we see it implemented and working properly.
Have only played one other game with a similar system - Star Citizen. There, it is nice to know that there will not be large server loads in my little slice of the galaxy even if numerous large fleet battles are constantly erupting on the other side of the planet - the server will only send my client the data it needs to keep my gaming experience as smooth as possible. Expecting the same from IS's server tech.
The team did an exceptional job of presenting it as clearly as possible.
I find it really impressive that Intrepid has this quality of innovation in its ranks. Great job overall, and I look forward to the continued work needed to hit that 500v500 optimization target!
Keep up the outstanding work Intrepid =^_^=
But the presentation would have been better if it weren't so PowerPoint-like.
I'd prefer to see a tech demo running in-game in real time, even if it isn't working perfectly. What I want to know is what you're aiming for and how it looks right now - a before-and-after comparison - and what the hard parts are. Just share what you can share with players.
I couldn't comment one way or another about what the best server solution would be, I'm just thankful you guys are willing to show us stuff like this.
It's an interesting and refreshing change of pace every once in a while.
In any case, knowing how much work it takes to even get 100 people on screen at once makes it a lot easier to understand and forgive little bugs or network hitches.
Sounds like Patrick68794 is a SC dev trying to promote his system....
Anyway, as a non-tech guy (Construction) I very much enjoyed the presentation, and deep dive into how this works, and how/why it has been an issue in the past.
All I knew was...zoning...it was just a thing. You're running along the road, then you're frozen mid-stride, then you continue on.
There was a lot of concern this deep dive would turn off guys like me, but honestly it was cool seeing the behind the scenes, and how they are working on preventing lag/bottlenecking.
All good, lots of love...except to haters > Patrick/SC clone
Later
Lol, so pointing out that server meshing isn't in SC yet makes me an "SC clone" and means I somehow hate AoC, I guess? What a dumb conclusion to jump to based on that post, though I guess maybe I shouldn't be surprised.
If you actually understood what you were reading, you'd see that, if anything, my posts just confirm that this kind of tech can work in general and are an endorsement of Intrepid's claims. I know understanding what you're reading before replying is tough for some people, but give it a shot next time.
Your posts try to say that Intrepid's can work...but not as well as SC's. You are all about promoting SC in all your trash talk. Maybe you should go play with your little friends in a corner and stop trying to talk down to people that call you out.
You are clearly a little troll, and to say otherwise is to belittle even your little brain.
Next....
They do not. Learn to understand what you're reading before making a fool of yourself. Not once have I said or implied that the meshing tech in SC is "better", nor have I "promoted SC" in any of my posts. All I've done is point out the differences and offer a few potential pros and cons that stand out to me as likely (unlike you, I do have experience and knowledge in this area). Neither is better than the other from a technological standpoint, and there's no way of knowing whether either approach will actually work better than the other. They're both groundbreaking new tech and they will both have their teething problems, same as any other cutting-edge technology. Now, if you could stop acting like a child and playing keyboard warrior, why not discuss the actual showcase and the technology?
This could be the single most significant technological advance for Unreal Engine. If it's done absolutely flawlessly, no one will ever know how hard this was or what it took, only that we can enjoy the game at a scale we haven't been able to in prior games. This could change how any sandbox/MMO-style game is built infrastructurally. Hats off to the engineers and developers who I'm sure spent countless hours with "it can't be done" staring them in the face.
What did you think about the presentation during this development update, and hearing more about what goes on behind the scenes of server networking?
I appreciate it, as I'm in tech and understand the amount of effort that engineers and developers put in. This is one of those moments where you all just made the most introverted people, who probably don't "toot their own horn", feel honored and proud of the work they put into this feature/infrastructure. Seriously, this is very impressive, and you all should keep doing updates in this style.
Is there anything in particular you’re excited or concerned about regarding what was shown with the Server Meshing Technology Preview?
I do think people might try to overload the tech and organize massive "blitz" runs, simply because you all showed how the system will add more servers and sub-divide zones. People are going to test the system at scale just because they can.
While you all showed this off and it's really awesome, what happens if 100, 1,000, 10,000, 100,000, or 1,000,000 people run into one area? Honestly, major towns and hubs in-game are going to be like this - will it all still handle that load?
Are there similar systems you’ve seen in other games that you like or dislike? If so, please explain!
I don't love server shards/instances; it's just a pain to transfer around to find your friend who's in the same place. It can really suck if you have to constantly manage instances, transfer around, etc.
FF14 probably has the best transferring, not only between "worlds" but also across data centers; these take about 5-10 seconds between worlds and about 20-30 seconds between data centers. I don't love this - but at least it's doable.
I wholeheartedly agree with your first paragraph. I would love to see more studios that make groundbreaking improvements to existing off-the-shelf tech allow those improvements to be integrated into future commercial iterations so they can be built upon. Lumen is likely the result of The Coalition doing this with their signed distance field solution for ray-traced shadows in Gears 5 (the exact same approach Lumen uses by default for global illumination and shadows), which was so efficient it ran on the original Xbox One. The Coalition worked directly with Epic on UE5 and on some of the initial tech demos showing off Lumen and other new tech like Nanite.
Multithreading the replication graph would be huge for all of the developers trying to build complex online games in Unreal; if anything makes it back to Epic from AoC's development, I think it should be this. I can also understand Intrepid wanting to keep the actual meshing tech they've developed in-house and proprietary, though, as it gives them a unique edge over other traditional MMOs while still showing other studios that it's a viable solution. One interesting thing I noticed is that Intrepid spent a good amount of time talking about how they came up with a solution to make actors in the world "dormant" for the replication graph update, but this functionality already exists in Unreal (and is actively used in shipped games), so I'm curious whether Intrepid built their own approach that works differently.
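For anyone unfamiliar with the dormancy idea being referenced, here's a conceptual sketch of what "dormant" actors buy you in a replication pass - just my own illustration of the concept, not Unreal's actual replication graph or dormancy API:

```python
from dataclasses import dataclass, field

# Conceptual sketch: a replication pass that skips "dormant" actors - ones
# whose replicated state hasn't changed - so per-tick cost scales with the
# number of active actors rather than all actors in the world.
@dataclass
class Actor:
    actor_id: int
    state: dict = field(default_factory=dict)
    dormant: bool = False

    def update_state(self, **changes):
        # Any state change wakes the actor so it gets replicated again.
        self.state.update(changes)
        self.dormant = False

def gather_replication_list(actors):
    """Return only awake actors, marking them dormant afterwards."""
    to_replicate = [a for a in actors if not a.dormant]
    for a in to_replicate:
        a.dormant = True  # stays dormant until its state changes again
    return to_replicate

# Example: 10,000 mostly static actors, a couple of which change each tick.
world = [Actor(i) for i in range(10_000)]
world[3].update_state(x=12.0, y=40.0)
world[7].update_state(hp=95)
print(len(gather_replication_list(world)))  # first pass: everything is still awake
world[3].update_state(x=13.0, y=41.0)
print(len(gather_replication_list(world)))  # second pass: only the actor that changed
```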
As of now, at least, they're not actually increasing player counts per "realm" (equivalent to FFXIV's worlds), so there will still be an upper bound (I believe around 10k) on the number of clients an entire mesh needs to handle. Honestly, I'm pretty sure they already had server meshing in mind when they initially announced the player count per realm. What I'm curious to see is how they'll handle that at the client level, because no current PC is going to be able to render 10k other player-controlled characters at the level of fidelity on display in AoC while maintaining the performance needed for action combat.
So in that regard, I just hope the developers of Ashes have accounted for that and are not surprised (like all the other MMOs) if their servers don't work the way they intend even after x amount of Alphas and Betas. I hope precautions are in place, so we'll see. I always hope for the best until the worst happens ^^
Good luck!
The tech side of MMOs is interesting, as it gives us a feeling for whether we're getting an instance-based MMO or maybe a small-population "channels" type of MMO, etc.
I'm concerned that the tech might not work as well once we introduce latency, disconnects, chat spamming, skill spamming, etc. into the mix.
The presentation itself was enlightening, providing valuable insights into the complexities of server infrastructure and networking. It's great to see the transparency in sharing these behind-the-scenes details, as it reinforces trust and excitement among the community.
Looking ahead, I'm particularly excited about the potential for large-scale battles and events with this technology. The ability to seamlessly transition between different server instances without disruptions is something I've been eagerly anticipating in MMOs. This could truly redefine what's possible in terms of player-driven narratives and emergent gameplay.
In terms of similar systems in other games, I've seen attempts at similar technologies, but none have been as ambitious or promising as what I've seen from Intrepid Studios. The attention to detail and commitment to making server meshing a cornerstone of the next-generation MMO experience is commendable.
Overall, I believe that integrating Server Meshing Technology into Ashes of Creation is not just a step forward, but a leap towards defining the future of MMOs. I look forward to seeing how this technology evolves and enhances our adventures in Verra.
Thank you for pushing the boundaries and striving for excellence. I'm excited to be a part of this journey with you all.
I feel like I'm in a "show me" phase. I want to believe everything Intrepid is saying is true, that they've cracked the code on how to deliver a truly seamless, open-world experience where we don't experience crashes. I just feel like, in order to 100% believe it, I'm going to have to get my hands on A2 and actually get into Verra.
I'll be honest, I didn't care about a lot of it. To me, it's Intrepid's job to make the vision a reality, so if they create something, I do my best to trust that it works. Again, though, I won't fully believe it until A2.
I'm curious how instances factor into all of this, specifically the minimal raid instances.
Nope, I've really only ever played SWTOR seriously, and that game is a joke compared to AoC.
My biggest concern is that the mini services sound a lot like a miniature, split-up, virtual version of the distributor that was criticized so heavily at the beginning of the stream. I would like to know why this is so different.
While the images shown looked great and were accurate, they also portrayed things very much in AoC's favor.
Before, it was a bunch of servers connected to the distributor, which in turn was connected to the WAN.
Now we have a bunch of servers connected to the WAN and also connected to a bunch of distributors (one for each service), which in turn are connected to the other servers (LAN).
I get the advantages we gain from this, but there is still a single point of failure in a fully interconnected system.
If the crafting system is down on my server, it's because one distributor server isn't running. Why can't the distributor of my neighboring game server temporarily take over that task?
Isn't that one of the advantages we have from all the servers being interconnected?
It would probably be slower on both servers, but at least crafting would still work.
It's all virtual anyway, I assume, so why not?
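As a rough sketch of the kind of failover being asked about here (entirely hypothetical on my part, with made-up names - not how Intrepid's distributors actually work):

```python
# Hypothetical failover: each service has a primary distributor and a
# neighboring realm's distributor registered as a fallback.
class Distributor:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request}"

class ServiceRouter:
    """Routes a service's requests, falling back to a neighbor if the primary fails."""
    def __init__(self, primary, fallback):
        self.primary = primary
        self.fallback = fallback

    def route(self, request):
        try:
            return self.primary.handle(request)
        except ConnectionError:
            # Degraded mode: the neighbor's distributor picks up the load,
            # so crafting is slower on both servers but still works.
            return self.fallback.handle(request)

crafting_a = Distributor("crafting-distributor-A")
crafting_b = Distributor("crafting-distributor-B")  # the neighboring game server's
router = ServiceRouter(primary=crafting_a, fallback=crafting_b)

crafting_a.healthy = False  # simulate the outage described above
print(router.route("craft iron sword"))  # served by the neighbor instead
```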
This PowerPoint presentation gave me "this meeting could have been an email" vibes. I know that what gets people excited to buy or play a game is gameplay, core mechanics, etc. Of course, people will be happy if you crack the server meshing thing, and I do appreciate that it's a bear to figure out and all the work that's gone into it.
I just don't need to know how the sausage is made.
How are you protecting against these types of exploiters? Or maybe not how but are you testing for it?
Thanks