CIRCLE WITH A DOT


@cda@social.sporiff.dev I'm curious what prompted that?

Uncategorized · 6 Posts · 4 Posters
  • amy@ice.ch3.st
    #1

    @cda@social.sporiff.dev I'm curious what prompted that? I think the risk of an improper db schema is not like, potential data loss or the db becoming harder to maintain and expand, but rather that the backend codebase will continually accrue technical debt in order to work around poorly defined entities and relationships. If the schema causes friction with the backend API, it should be addressed at the database level as quickly as possible. But also, like, perfectly aligned db schemas don't exist in the wild, like they do in textbooks. We are all imperfect beings with deadlines. Are you designing something from scratch?

    • cda@social.sporiff.dev
      #2

      @amy I'm designing it from scratch, and as part of that I'm thinking about the fact that there are probably places where I'm going to need to divide data up because their state changes asynchronously. That means I need to have some fundamental understanding of the best way to structure those tables, how to index them, the best way to write queries, etc. etc.

      I know how to do the actual breaking up of everything and how to create indices in my code, but I'm realizing that I don't understand database systems and design well enough to make smart decisions there, you know?
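
A minimal sketch of what splitting out asynchronously-changing state and indexing it can look like (table and column names here are illustrative, using SQLite via Python's stdlib):

```python
import sqlite3

# Hypothetical schema: a stable "resource" record plus a separate table for
# state that changes asynchronously, so frequent updates never rewrite the
# parent row.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE resource (
        id    INTEGER PRIMARY KEY,
        title TEXT NOT NULL
    );
    CREATE TABLE resource_state (
        resource_id INTEGER NOT NULL REFERENCES resource(id),
        status      TEXT NOT NULL,
        updated_at  TEXT NOT NULL
    );
    -- Index the foreign key so "all state rows for this resource" is an
    -- index lookup, not a full table scan.
    CREATE INDEX idx_state_resource ON resource_state(resource_id);
""")

# EXPLAIN QUERY PLAN shows SQLite choosing the index for the lookup.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM resource_state WHERE resource_id = ?",
    (1,),
).fetchall()
print(plan)  # plan detail mentions idx_state_resource
```

`EXPLAIN QUERY PLAN` is a cheap way to check whether a query actually uses the index you created, before worrying about anything fancier.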


      • amy@ice.ch3.st
        #3

        @cda@social.sporiff.dev you mentioned that the data might be time-dependent, if I understand what you were saying, and in my opinion, databases ideally shouldn't know where or when they are. Like, if a row should be marked as stale if it was updated more than a week ago, most relational dbs offer a way to schedule a task every night that could set the 'stale' column to true if necessary. But that's not how I would approach it. I would pass the current time into a query against the last_updated column, so that the stale state of the row is a computed property rather than a time-dependent column. I think a space/time/env-independent schema is always possible to design. Like, I would like to be able to physically pick up the database and move it to a different timezone, send it 10 years in the past or future with my portal, whatever, and the stateless backend should be able to use it still.
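
A sketch of that "stale is computed, not stored" idea (names are illustrative; SQLite via Python, with timestamps stored in UTC so the database itself stays timezone-agnostic):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# No nightly job flips a 'stale' flag; the caller passes "now" into the
# query, so the same database gives the right answer wherever (or whenever)
# it runs.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE item (id INTEGER PRIMARY KEY, last_updated TEXT NOT NULL)"
)

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
con.executemany(
    "INSERT INTO item (id, last_updated) VALUES (?, ?)",
    [
        (1, (now - timedelta(days=10)).isoformat()),  # stale: > 1 week old
        (2, (now - timedelta(days=2)).isoformat()),   # fresh
    ],
)

# "now" is a query parameter, never the database server's own clock.
stale_ids = [row[0] for row in con.execute(
    "SELECT id FROM item WHERE last_updated < ?",
    ((now - timedelta(days=7)).isoformat(),),
)]
print(stale_ids)  # [1]
```

Because staleness is derived at query time, there is no column that can drift out of date between scheduled jobs.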

        Beyond that, I don't have much high level advice lol. If I'm being honest, I think I'm sensing a little impostor syndrome. I think you have more experience than you think you do, and you're asking the right questions, just try not to get too stuck in the research phase.


        • cda@social.sporiff.dev
          #4

          @amy It's not really time-dependent, I probably didn't express that well. It's more like I suspect that parts of the record might need to be updated or expanded separately in future. It's about podcasts, and my suspicion is that if a podcast feed changes, we need a mechanism by which to resolve multiple feed URLs to the same feed entity, which means storing feed URLs separately and referencing the parent record.

          So it's not time so much as it is wanting to have a central resource entity then splitting up any information I know to be variable into separate records. I just want to make sure I'm A) not overcomplicating the design and B) future-proofing it in a scalable and extensible way. I know enough to realize there might be an issue there, but not enough to know how to design the structure properly, you know?
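
One way the feed-URL design described above could look (a sketch under assumed names, not a definitive schema): a central feed entity, with every URL the feed has ever been served from in a child table, so any known URL resolves back to the same entity.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE feed (
        id    INTEGER PRIMARY KEY,
        title TEXT NOT NULL
    );
    CREATE TABLE feed_url (
        url     TEXT PRIMARY KEY,               -- each URL maps to one feed
        feed_id INTEGER NOT NULL REFERENCES feed(id)
    );
""")

con.execute("INSERT INTO feed (id, title) VALUES (1, 'Example Podcast')")
# When the feed moves, the old URL stays: just add the new one.
con.executemany(
    "INSERT INTO feed_url (url, feed_id) VALUES (?, 1)",
    [("https://old.example/rss",), ("https://new.example/rss",)],
)

def resolve(url):
    """Resolve any known feed URL to its parent feed entity, else None."""
    row = con.execute(
        "SELECT feed_id FROM feed_url WHERE url = ?", (url,)
    ).fetchone()
    return row[0] if row else None

print(resolve("https://old.example/rss"))  # 1
print(resolve("https://new.example/rss"))  # 1
```

The `PRIMARY KEY` on `url` also enforces that a URL can never point at two different feeds, which is exactly the invariant the resolution mechanism needs.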

          • thomas_shone@infosec.exchange
            #5

            @cda possibly just to help narrow down what type of book you're looking for:

            Based on the conversation here, you're probably looking specifically for data modelling knowledge: online transaction processing (OLTP) modelling as opposed to online analytical processing (OLAP) modelling. I.e., you're modelling for a running web application rather than for big-data analytics.

            It might be an idea to take a look at Database Normalisation (https://en.wikipedia.org/wiki/Database_normalization) and see if this seems close to what you're trying to learn about.

            Otherwise you might get buried in books around performance or tweaking specific database systems which is another whole topic (query performance, index design, table engines, etc).


            • salaciouscrumb@infosec.exchange
              #6

              @cda I have a very similar, but very different challenge. Building an info security database, I want to know where my IT systems are and (for example) how many vulnerabilities they have. This sounds easy on day one - have a table with columns for IP, hostname, and vulnerability count. But those things change regularly. Next week a system may change IP and have fewer vulnerabilities, and the following week it will have more.

              How do I store all of that so I can graph vulnerabilities over time?
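
One common answer to that question (a sketch, not the only design; names are illustrative): keep a stable asset entity, and append a dated snapshot row per scan instead of overwriting columns. Graphing over time then becomes a plain query on the snapshot table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE asset (
        id       INTEGER PRIMARY KEY,
        hostname TEXT NOT NULL
    );
    CREATE TABLE asset_snapshot (
        asset_id   INTEGER NOT NULL REFERENCES asset(id),
        scanned_on TEXT NOT NULL,      -- date of the scan
        ip         TEXT NOT NULL,      -- IP may differ between scans
        vuln_count INTEGER NOT NULL
    );
""")

con.execute("INSERT INTO asset (id, hostname) VALUES (1, 'web01')")
# Each scan appends a row; nothing is ever overwritten.
con.executemany(
    "INSERT INTO asset_snapshot VALUES (1, ?, ?, ?)",
    [
        ("2024-06-01", "10.0.0.5", 12),
        ("2024-06-08", "10.0.0.9", 8),   # new IP, fewer vulns
        ("2024-06-15", "10.0.0.9", 14),  # vulns up again
    ],
)

# The time series to graph: (scan date, vulnerability count) per asset.
series = con.execute(
    "SELECT scanned_on, vuln_count FROM asset_snapshot "
    "WHERE asset_id = ? ORDER BY scanned_on", (1,)
).fetchall()
print(series)
```

The trade-off is storage growth versus history: the append-only table keeps every past state, so "what did this host look like in June" is answerable without any extra machinery.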
