It's interesting how many different ways people in computer science have come up with to represent data. Most of the products we end up using nowadays are based on technology that first appeared in the early '70s: relational databases, SQL (originally SEQUEL) and the work happening at IBM at the time. That doesn't mean there weren't other approaches back then, but SQL is the technology that lives on to this day.
I write about this because today at work, we thought about how cool it would be to model something in a graph database. What we wanted to model was relationships between different rows in a table, which is precisely what graph databases are all about. We didn't end up doing any work on it, since the return on investment wouldn't justify taking on such a big job: we don't use any graph database right now.
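To make the idea concrete, here is a minimal sketch (all table and column names are made up for illustration) of storing row-to-row relationships in a plain relational database. The edges live in a `follows` table, and walking the graph takes a recursive CTE, which is exactly the kind of query a graph database would express more directly.

```python
import sqlite3

# Hypothetical example: "follows" relationships between rows of a
# users table, i.e. a graph stored in a relational database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE follows (
        follower INTEGER REFERENCES users(id),
        followee INTEGER REFERENCES users(id)
    );
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob'), (3, 'cee');
    INSERT INTO follows VALUES (1, 2), (2, 3);
""")

# Traversing the relationships transitively requires a recursive CTE;
# in a graph database this would be a short path query.
rows = conn.execute("""
    WITH RECURSIVE reachable(id) AS (
        SELECT followee FROM follows WHERE follower = 1
        UNION
        SELECT f.followee
        FROM follows f JOIN reachable r ON f.follower = r.id
    )
    SELECT name FROM users JOIN reachable USING (id) ORDER BY id
""").fetchall()
print([name for (name,) in rows])  # everyone ann can reach: ['bob', 'cee']
```

The relational version works fine at small scale; the pitch of graph databases is that this traversal stays cheap and readable as the chains get long.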
It also reminds me a little of object-oriented databases. I never used one, but I heard that they were the "big thing" back in the '90s. The main value proposition, if I remember correctly, was a direct mapping of data from the database to the programming language, since object-oriented languages like C++ and Java were starting to gain traction. To me, it seems like an over-simplification: "I don't want to have the problem of mapping data, so I'll make sure no mapping is needed." I never saw a problem with an SQL statement in a codebase if it was more readable than object-relational mapper magic, which is quite often the case for more complex queries.
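As a hypothetical illustration of what I mean (the schema and names are invented for this sketch), here is a small aggregation written as plain SQL embedded in code. The query reads top to bottom like the question it answers, where an ORM would typically hide the grouping behind chained method calls.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders VALUES
        (1, 'ann', 10.0), (2, 'ann', 25.0), (3, 'bob', 5.0);
""")

# Which customers spent more than 20 in total? The SQL states the
# aggregation, the filter on the aggregate and the ordering directly.
big_spenders = conn.execute("""
    SELECT customer, SUM(total) AS spent
    FROM orders
    GROUP BY customer
    HAVING spent > 20
    ORDER BY spent DESC
""").fetchall()
print(big_spenders)  # [('ann', 35.0)]
```

For a query this small an ORM is fine too; it's once you add joins, window functions or HAVING clauses that the raw statement tends to win on readability.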
Then there is also the whole movement of NoSQL databases. I have used some of them, like DynamoDB, Redis and Elasticsearch, and each of them has its uses. I don't see them replacing SQL databases anytime soon, since I don't think people like the implicit schemas that some of them use. By an implicit schema, I mean a specific structure of data that isn't defined up front.
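A minimal sketch of the downside, using invented documents: when nothing enforces the shape of the data, two records in the same collection can disagree on field names and types, and readers only find out at query time.

```python
# With an implicit schema, nothing enforces the shape of the data.
docs = [
    {"name": "ann", "age": 34},
    {"name": "bob", "age": "unknown"},   # same field, different type
    {"username": "cee"},                 # different field entirely
]

def ages(collection):
    # Defensive reads are the cost of not defining the schema up front:
    # every consumer re-checks presence and type for itself.
    return [d.get("age") for d in collection if isinstance(d.get("age"), int)]

print(ages(docs))  # [34]
```

In a relational database the equivalent checks live once, in the table definition, instead of being repeated in every piece of code that reads the data.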