MongoDB nested query using aggregate function
I have a collection "superpack" which has nested objects. A sample document looks like the one below.
{
  "_id" : ObjectId("56038c8cca689261baca93eb"),
  "name": "test sub",
  "packs": [
    {
      "id": "55fbc7f6b0ce97a309b3cead",
      "name": "classic",
      "packdispval": "pack",
      "billingpts": [
        {
          "id": "55fbc7f6b0ce97a309b3ceab",
          "name": "classic 1 month",
          "expiryval": 1,
          "amount": 20,
          "topups": [
            { "id": "55fbc7f6b0ce97a309b3cea9", "name": "1 extra", "amount": 8 },
            { "id": "55fbc7f6b0ce97a309b3ceaa", "name": "2 extra", "amount": 12 }
          ]
        },
        {
          "id": "55fbc7f6b0ce97a309b3ceac",
          "name": "classic 2 month",
          "expiryval": 1,
          "amount": 30,
          "topups": [
            { "id": "55fbc7f6b0ce97a309b3cea8", "name": "3 extra", "amount": 16 }
          ]
        }
      ]
    }
  ]
}
I need to query on the nested topups object's id field, and the result should contain the selected topup object along with its associated parents. I am expecting the output below when querying on topup id 55fbc7f6b0ce97a309b3cea9.
{
  "_id" : ObjectId("56038c8cca689261baca93eb"),
  "name": "test sub",
  "packs": [
    {
      "id": "55fbc7f6b0ce97a309b3cead",
      "name": "classic",
      "packdispval": "pack",
      "billingpts": [
        {
          "id": "55fbc7f6b0ce97a309b3ceab",
          "name": "classic 1 month",
          "expiryval": 1,
          "amount": 20,
          "topups": [
            { "id": "55fbc7f6b0ce97a309b3cea9", "name": "1 extra", "amount": 8 }
          ]
        }
      ]
    }
  ]
}
I tried the aggregate query below for the same, but it is not returning any result. Can someone please tell me what is wrong in the query?
db.superpack.aggregate([
  { $match: { "id": "55fbc7f6b0ce97a309b3cea9" } },
  { $redact: {
    $cond: {
      if: { $eq: [ "$id", "55fbc7f6b0ce97a309b3cea9" ] },
      then: "$$KEEP",
      else: "$$PRUNE"
    }
  }}
])
Unfortunately $redact is not a viable option here, based on the fact that the recursive $$DESCEND action is looking for a field called "id" at all levels of the document. You cannot possibly ask it to act only at a specific level of embedding; it's all or nothing.
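To illustrate the point, here is a minimal sketch of such an attempt (not taken from the question; the $match path is corrected to the nested field, but the collection name and values are the same):

// Hypothetical "fixed" attempt: match on the full nested path, then try to
// prune anything that does not carry the wanted "id".
db.superpack.aggregate([
  { "$match": { "packs.billingpts.topups.id": "55fbc7f6b0ce97a309b3cea9" }},
  { "$redact": {
    "$cond": {
      // This test is evaluated at EVERY level: the root document, each pack,
      // each billing point and each topup all get the same comparison.
      "if": { "$eq": [ "$id", "55fbc7f6b0ce97a309b3cea9" ] },
      "then": "$$KEEP",
      "else": "$$PRUNE"
    }
  }}
])
// The root document has no "id" field, so it fails the test and the whole
// document is pruned before the matching topup is ever reached. Swapping
// $$PRUNE for $$DESCEND has the opposite problem: $$DESCEND keeps the fields
// at every level it passes through, so nothing gets filtered out. There is
// no way to say "only apply this condition at the topups level".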
This means you need an alternate method of filtering the content rather than $redact. Since the "id" values are unique there is no problem filtering via "set" operations, so the most efficient way to do this is via the following:
db.docs.aggregate([
  { "$match": { "packs.billingpts.topups.id": "55fbc7f6b0ce97a309b3cea9" }},
  { "$project": {
    "packs": {
      "$setDifference": [
        { "$map": {
          "input": "$packs",
          "as": "pack",
          "in": {
            "$let": {
              "vars": {
                "billingpts": {
                  "$setDifference": [
                    { "$map": {
                      "input": "$$pack.billingpts",
                      "as": "billing",
                      "in": {
                        "$let": {
                          "vars": {
                            "topups": {
                              "$setDifference": [
                                { "$map": {
                                  "input": "$$billing.topups",
                                  "as": "topup",
                                  "in": {
                                    "$cond": [
                                      { "$eq": [ "$$topup.id", "55fbc7f6b0ce97a309b3cea9" ] },
                                      "$$topup",
                                      false
                                    ]
                                  }
                                }},
                                [false]
                              ]
                            }
                          },
                          "in": {
                            "$cond": [
                              { "$ne": [ { "$size": "$$topups" }, 0 ] },
                              {
                                "id": "$$billing.id",
                                "name": "$$billing.name",
                                "expiryval": "$$billing.expiryval",
                                "amount": "$$billing.amount",
                                "topups": "$$topups"
                              },
                              false
                            ]
                          }
                        }
                      }
                    }},
                    [false]
                  ]
                }
              },
              "in": {
                "$cond": [
                  { "$ne": [ { "$size": "$$billingpts" }, 0 ] },
                  {
                    "id": "$$pack.id",
                    "name": "$$pack.name",
                    "packdispval": "$$pack.packdispval",
                    "billingpts": "$$billingpts"
                  },
                  false
                ]
              }
            }
          }
        }},
        [false]
      ]
    }
  }}
])
Here, after digging down to the innermost array and filtering it, the size of each resulting array going outwards is tested to see whether it is zero, and the element is omitted from the results if it is. It's a long listing, but it is the most efficient way, since each array is filtered down first and entirely within each document.
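The core pattern, repeated at each level of nesting, is this fragment pulled out of the pipeline above (shown here for the innermost topups array):

// Map each element to either itself (when its "id" matches) or to false,
// then strip the false placeholders out with $setDifference.
"topups": {
  "$setDifference": [
    { "$map": {
      "input": "$$billing.topups",
      "as": "topup",
      "in": {
        "$cond": [
          { "$eq": [ "$$topup.id", "55fbc7f6b0ce97a309b3cea9" ] },
          "$$topup",   // keep the matching element as-is
          false        // placeholder removed by the $setDifference below
        ]
      }
    }},
    [false]
  ]
}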
A far less efficient way is to pull everything apart with $unwind and then $group the results back together:
db.docs.aggregate([
  { "$match": { "packs.billingpts.topups.id": "55fbc7f6b0ce97a309b3cea9" }},
  { "$unwind": "$packs" },
  { "$unwind": "$packs.billingpts" },
  { "$unwind": "$packs.billingpts.topups" },
  { "$match": { "packs.billingpts.topups.id": "55fbc7f6b0ce97a309b3cea9" }},
  { "$group": {
    "_id": {
      "_id": "$_id",
      "packs": {
        "id": "$packs.id",
        "name": "$packs.name",
        "packdispval": "$packs.packdispval",
        "billingpts": {
          "id": "$packs.billingpts.id",
          "name": "$packs.billingpts.name",
          "expiryval": "$packs.billingpts.expiryval",
          "amount": "$packs.billingpts.amount"
        }
      }
    },
    "topups": { "$push": "$packs.billingpts.topups" }
  }},
  { "$group": {
    "_id": {
      "_id": "$_id._id",
      "packs": {
        "id": "$_id.packs.id",
        "name": "$_id.packs.name",
        "packdispval": "$_id.packs.packdispval"
      }
    },
    "billingpts": {
      "$push": {
        "id": "$_id.packs.billingpts.id",
        "name": "$_id.packs.billingpts.name",
        "expiryval": "$_id.packs.billingpts.expiryval",
        "amount": "$_id.packs.billingpts.amount",
        "topups": "$topups"
      }
    }
  }},
  { "$group": {
    "_id": "$_id._id",
    "packs": {
      "$push": {
        "id": "$_id.packs.id",
        "name": "$_id.packs.name",
        "packdispval": "$_id.packs.packdispval",
        "billingpts": "$billingpts"
      }
    }
  }}
])
The listing looks a lot simpler, but of course there is a lot of overhead introduced by $unwind here. The grouping process works by keeping a copy of everything outside of the current array level being reconstructed, and then pushing that content back into an array in the next stage, working outwards until you reach the root _id.
Please note that unless you intend such a search to match more than one document, or unless you are going to see significant gains in reduced network traffic by cutting down the response size from a very large document, you are advised to do neither of these and instead follow the same design as the first pipeline example in client code.
Whilst the first example is still okay performance wise, it's still a mouthful to send to the server, and as a general rule the same operations are typically written in a much cleaner way in client code that processes and filters the resulting structure (a rough sketch of that follows after the result below). Either way, the document that comes back looks like this:
{
  "_id" : ObjectId("56038c8cca689261baca93eb"),
  "packs" : [
    {
      "id" : "55fbc7f6b0ce97a309b3cead",
      "name" : "classic",
      "packdispval" : "pack",
      "billingpts" : [
        {
          "id" : "55fbc7f6b0ce97a309b3ceab",
          "name" : "classic 1 month",
          "expiryval" : 1,
          "amount" : 20,
          "topups" : [
            { "id" : "55fbc7f6b0ce97a309b3cea9", "name" : "1 extra", "amount" : 8 }
          ]
        }
      ]
    }
  ]
}
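As a rough illustration of the client-side approach suggested above, here is a minimal sketch in mongo shell JavaScript. It assumes the matching document is fetched with a plain query on the nested id and then the arrays are filtered in code, from the innermost level outwards, just like the first pipeline; the variable names are only illustrative:

var topupId = "55fbc7f6b0ce97a309b3cea9";

// Fetch the whole matching document, then filter the nested arrays in code.
var doc = db.superpack.findOne({ "packs.billingpts.topups.id": topupId });

doc.packs = doc.packs.map(function (pack) {
  pack.billingpts = pack.billingpts.map(function (billing) {
    billing.topups = billing.topups.filter(function (topup) {
      return topup.id === topupId;
    });
    return billing;
  }).filter(function (billing) {
    return billing.topups.length > 0;   // drop billing points with no matching topup
  });
  return pack;
}).filter(function (pack) {
  return pack.billingpts.length > 0;    // drop packs with no matching billing points
});

printjson(doc);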