mirror of
https://github.com/doctrine/orm.git
synced 2026-03-24 06:52:09 +01:00
Mass JSON encoding => mix iterate() and resultAsArray() #5288
Originally created by @cyrilchapon on GitHub (Oct 7, 2016).
Originally assigned to: @Ocramius on GitHub.
I've got a huge table and I just want to get it as JSON.
I'm able to "process the whole data table as an array".
This takes 4 seconds for 15k records
And to "process each object 1 by 1, clearing Doctrine every time"
(though this last solution means manually encoding every row to JSON).
This takes 15 seconds for 15k records (!) even if I detach($current) and even clear() the entity manager.

How would I "process each record 1 by 1 as an array"?
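The "1 by 1" pattern described above can be sketched as follows, assuming Doctrine ORM 2.x; the entity class, DQL, and getter are hypothetical placeholders, not taken from the issue:

```php
<?php
// Sketch of the per-row iteration pattern (Doctrine ORM 2.x).
// `App\Entity\Record` and its fields are hypothetical.
$q = $em->createQuery('SELECT r FROM App\Entity\Record r');

$rows = [];
foreach ($q->iterate() as $row) {
    $entity = $row[0];              // iterate() yields one entity per step

    // Manual per-row serialisation: each field has to be copied by hand.
    $rows[] = [
        'id' => $entity->getId(),
        // ... other fields ...
    ];

    $em->detach($entity);           // drop the unit-of-work reference
}

$json = json_encode($rows);
```

The detach() call keeps the identity map from growing, but every row still pays the full object-hydration cost, which is consistent with the 15-second timing reported above.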
I tried
$data = $q->iterate(Query::HYDRATE_ARRAY); which gave me an error.

@Ocramius commented on GitHub (Oct 7, 2016):
You'd probably use SQL for that. If speed is necessary, the ORM is NOT designed to get you there fast, when it comes to batch processing.
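Dropping to raw SQL, as suggested here, might look roughly like this (a sketch using the Doctrine DBAL connection as available in 2016; the table and column names are placeholders):

```php
<?php
// Bypass the ORM entirely: query through the underlying DBAL connection.
// Table and column names are hypothetical.
$conn = $em->getConnection();
$stmt = $conn->executeQuery('SELECT id, name, created_at FROM record');

// fetchAll() returns plain associative arrays: no entity hydration,
// no unit-of-work bookkeeping, so memory and CPU scale with the raw data.
$json = json_encode($stmt->fetchAll(\PDO::FETCH_ASSOC));
```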
Closing as can't fix.

@cyrilchapon commented on GitHub (Oct 7, 2016):
@Ocramius thanks for your answer.
Even if I can see that the ORM was indeed not designed with speed in mind, I don't think that in 2016 a simple SQL SELECT of 15,000 rows, without any join, serialised to JSON can be called "batch processing"...
@Ocramius commented on GitHub (Oct 7, 2016):
@cyrilchapon it really depends on the amount of data, queries, etc.
We have performance tests that hydrate/persist 10k records within 0.2 seconds, so if you think there is a performance problem somewhere, there needs to be some sort of benchmark exposing it, or some profiling exposing a particularly slow piece of the logic.
If you have large JSON structures, then most of the overhead will end up being in json_encode(), which we have no control over.

If your entities implement JsonSerializable, then you may have an issue in that logic.

It's not as simple as "I don't think in 2016 web can call simple SQL select to json serialisation of 15000 rows"; there needs to be some testing and some reproducible issue.

That said, 15000 rows loaded into memory is way out of scope for this ORM: wrong tool for the job already.
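One way to keep 15k rows out of memory at once, regardless of how the rows are fetched, is to stream the JSON output row by row instead of building the full array first. A minimal sketch in plain PHP (no Doctrine; the function name is our own):

```php
<?php
// Stream an iterable of rows to an open stream as a JSON array.
// Peak memory stays proportional to one row, not the whole result set.
function streamJsonRows(iterable $rows, $out): void
{
    fwrite($out, '[');
    $first = true;
    foreach ($rows as $row) {
        if (!$first) {
            fwrite($out, ',');
        }
        fwrite($out, json_encode($row));
        $first = false;
    }
    fwrite($out, ']');
}

// Example with an in-memory stream (in a web context this could be
// php://output instead):
$out = fopen('php://temp', 'r+');
streamJsonRows([['id' => 1], ['id' => 2]], $out);
rewind($out);
echo stream_get_contents($out); // [{"id":1},{"id":2}]
```

Fed by an iterating query (or a raw SQL cursor), this avoids materialising the full 15k-element array that json_encode() would otherwise require.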