Perhaps we can go top down - write smart analysers that dynamically denormalise data based on usage patterns; indeed, database optimisation is an industry sector in its own right. But another, dumber option is bottom up - skip the initial structural 'typing' step altogether and normalise only where necessary. Normalisation can be done later, on demand.
I don't even want to design database schemas. To hell with modeling. I just want a system that takes a great, undifferentiated pile of facts and infers entities statistically - primarily from the content, but also from the access patterns. Actually, I don't even care whether it infers entities; that's an implementation detail I don't need to know about. It would just optimize for query time, or insert time, or some combination; inferring some table structure would probably help it do that.
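To make that a little more concrete, here is a toy sketch of one way such inference might start (the heuristic, the data, and all names are my own assumptions, not anything the idea commits to): group subjects by the set of predicates they carry, and treat each distinct predicate signature as a candidate "table".

```python
# Sketch only: infer candidate "tables" from a pile of (subject, predicate, object)
# facts by grouping subjects on their predicate signatures. Hypothetical heuristic.
from collections import defaultdict

facts = [
    ("alice", "name", "Alice"),
    ("alice", "email", "alice@example.com"),
    ("bob", "name", "Bob"),
    ("bob", "email", "bob@example.com"),
    ("acme", "company_name", "Acme Ltd"),
    ("acme", "country", "UK"),
]

def infer_tables(facts):
    # Collect the set of predicates seen for each subject.
    predicates_by_subject = defaultdict(set)
    for s, p, _ in facts:
        predicates_by_subject[s].add(p)
    # Subjects sharing the same predicate signature become one candidate table.
    tables = defaultdict(list)
    for subject, preds in predicates_by_subject.items():
        tables[frozenset(preds)].append(subject)
    return tables

for signature, subjects in infer_tables(facts).items():
    print(sorted(signature), "->", subjects)
# e.g. ['email', 'name'] -> ['alice', 'bob']
#      ['company_name', 'country'] -> ['acme']
```

A real system would of course weight this by value statistics and observed queries rather than exact signature matches, but the point stands: the structure falls out of the data, nobody has to declare it up front.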
Why do I need to tell the computer the relations in my data? All I should have to do is insert facts, as triples. When I ask it a question, I want those facts returned (or conclusions drawn from them).
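A minimal sketch of that interface, assuming nothing beyond a list of triples and pattern matching (the TripleStore name and the None-as-wildcard convention are mine, purely for illustration):

```python
# Minimal triple store sketch: insert facts, query by pattern.
# None acts as a wildcard in any position. Purely illustrative.
class TripleStore:
    def __init__(self):
        self.facts = []

    def insert(self, subject, predicate, obj):
        self.facts.append((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        pattern = (subject, predicate, obj)
        for fact in self.facts:
            if all(p is None or p == f for p, f in zip(pattern, fact)):
                yield fact

db = TripleStore()
db.insert("alice", "works_for", "acme")
db.insert("bob", "works_for", "acme")
db.insert("acme", "located_in", "UK")

print(list(db.query(predicate="works_for")))   # who works for whom
print(list(db.query(obj="UK")))                # everything located in the UK
```

Everything else - indexes, inferred tables, denormalised copies - would be the system's business, invisible behind insert and query.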
Is a really tweaked Prolog the ultimate DBMS?
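In that spirit, the "conclusions" part is just rule application over the stored facts. A toy forward-chaining example, assuming a made-up parent/ancestor rule rather than any particular Prolog's machinery:

```python
# Toy forward chaining: derive ancestor(X, Z) from parent facts until fixpoint.
# Illustrative only; a real Datalog/Prolog engine does far more.
facts = {
    ("parent", "tom", "bob"),
    ("parent", "bob", "ann"),
}

def derive_ancestors(facts):
    derived = set(facts)
    # ancestor(X, Y) :- parent(X, Y).
    derived |= {("ancestor", x, y) for rel, x, y in facts if rel == "parent"}
    changed = True
    while changed:
        changed = False
        # ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
        new = {("ancestor", x, z)
               for rel1, x, y1 in derived if rel1 == "ancestor"
               for rel2, y2, z in derived if rel2 == "parent" and y1 == y2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

print(sorted(t for t in derive_ancestors(facts) if t[0] == "ancestor"))
# [('ancestor', 'bob', 'ann'), ('ancestor', 'tom', 'ann'), ('ancestor', 'tom', 'bob')]
```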