database - Optimize MySQL Query (rawdata to processed table)
Hi, my question is this: I have a file that reads 3000 rows of data into a table with the LOAD DATA LOCAL INFILE command. After that, a trigger on the table that was inserted into copies 3 columns from the updated table and 2 columns from a table that already exists in the database (if that is unclear, I mean the structures shown below). The combinations entered into the processed table have to have unique glnumbers. This normally takes around a minute and a half. I find that pretty long, and I'm wondering if that's normal for what I'm doing (I can't believe that's true) or if there is a way to optimize the queries so it goes faster?
The tables being inserted into are labeled with the first 3 letters of each month. Here is the default structure.
rawdata structure:

| idjan | glnumber | journel | invoicenumber | date | jt | debit | credit | descriptiondetail | totaldebit | totalcredit |
|-------|----------|---------|---------------|------|----|-------|--------|-------------------|------------|-------------|

(sorry for the poor formatting; there doesn't seem to be a way around it)
After-insert trigger query:
    DELETE FROM processedjan;
    INSERT INTO processedjan (glnumber, debit, credit, bucket1, bucket2)
    SELECT a.glnumber, a.totaldebit, a.totalcredit, b.bucket1, b.bucket2
    FROM jan a
    INNER JOIN bucketinformation b ON a.glnumber = b.glnumber
    GROUP BY a.glnumber;
processed table structure:

| glnumber | bucket1 | bucket2 | credit | debit |
|----------|---------|---------|--------|-------|
Also, I guess it helps to know that bucket1 and bucket2 come from a table matched against glnumber. That table has 800 rows and 3 columns: glnumber and the 2 buckets.
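To give a clearer picture, the lookup table is shaped roughly like this (just a sketch; the actual column types aren't shown above, so the ones below are placeholders):

```sql
-- Sketch of the lookup table; only the column names and row count
-- come from the description above, the types are assumed.
CREATE TABLE bucketinformation (
    glnumber VARCHAR(32) NOT NULL,
    bucket1  VARCHAR(64),
    bucket2  VARCHAR(64),
    -- a key on glnumber keeps the join in the trigger query fast,
    -- though at 800 rows a scan would be cheap anyway
    PRIMARY KEY (glnumber)
);
```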
While PostgreSQL has statement-level triggers, MySQL only has row-level triggers. From the MySQL reference:
A trigger is defined to activate when a statement inserts, updates, or deletes rows in the associated table. These row operations are trigger events. For example, rows can be inserted by INSERT or LOAD DATA statements, and an insert trigger activates for each inserted row.
So while you manage to load the 3000 rows in one operation, unfortunately 3000 more trigger executions happen behind it. Given the complex nature of the transaction, it sounds like you might be performing 2-3 queries per row. That's the real reason for the slowdown.
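To see why, consider what your trigger presumably looks like (a hypothetical reconstruction from your description; the name trg_jan_insert is made up). MySQL runs the FOR EACH ROW body once per inserted row, so the full delete-and-rebuild below executes 3000 times during the load:

```sql
-- Hypothetical reconstruction of the trigger described in the question.
-- FOR EACH ROW means this entire body runs once per inserted row.
DELIMITER //
CREATE TRIGGER trg_jan_insert AFTER INSERT ON jan
FOR EACH ROW
BEGIN
    DELETE FROM processedjan;
    INSERT INTO processedjan (glnumber, debit, credit, bucket1, bucket2)
    SELECT a.glnumber, a.totaldebit, a.totalcredit, b.bucket1, b.bucket2
    FROM jan a
    INNER JOIN bucketinformation b ON a.glnumber = b.glnumber
    GROUP BY a.glnumber;
END//
DELIMITER ;
```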
You can speed things up by disabling the trigger and carrying out the INSERT ... SELECT yourself after the LOAD DATA LOCAL INFILE completes. This can be automated with a small script.
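A minimal sketch of that workflow, assuming the hypothetical trigger name above and a made-up file path. Note that MySQL has no statement to disable a single trigger, so dropping it before the load and recreating it afterwards is the usual workaround:

```sql
-- 1. Drop the per-row trigger so the bulk load runs unencumbered.
DROP TRIGGER IF EXISTS trg_jan_insert;  -- hypothetical trigger name

-- 2. Bulk-load the raw data (file path and delimiters are assumptions).
LOAD DATA LOCAL INFILE '/path/to/jan.csv'
INTO TABLE jan
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

-- 3. Rebuild the processed table once for the whole batch instead of
--    once per inserted row. The query is kept exactly as in the question;
--    note it relies on MySQL's non-strict GROUP BY behavior.
DELETE FROM processedjan;
INSERT INTO processedjan (glnumber, debit, credit, bucket1, bucket2)
SELECT a.glnumber, a.totaldebit, a.totalcredit, b.bucket1, b.bucket2
FROM jan a
INNER JOIN bucketinformation b ON a.glnumber = b.glnumber
GROUP BY a.glnumber;

-- 4. Recreate the trigger here if single-row inserts still rely on it.
```

This turns roughly 3000 trigger invocations into a single set-based rebuild, which should bring the runtime down dramatically.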