Hi,
I am trying to use hierarchy in pubsub nodes, but could not find much help anywhere. I need to know exactly what the parent column in the pubsub_node table holds. I created collection nodes, but they do not have a parent. Am I missing something?
In the pubsub_state table, the subscriptions column appears to be a foreign key, so where can I find its master table?
Also, is the Unix-like node naming (/home/host/...) a practice or a convention?
Thanks,
Prasad
When using the hometree node plugin
When using the hometree node plugin with the default nodetree_tree tree plugin, you de facto get a hierarchy like /home/host/user/... This is a design choice of the hometree plugin.
In this case, the parent column contains the node id of the parent. For example, if you create /home/host/user/test, then its parent is /home/host/user. Each node can contain items or be the parent of other child nodes, but a publish to a parent node does not propagate to its child nodes. This is the preferred plugin when you need a hierarchy.
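If you want to see this for yourself with the default mnesia backend, you can read a node's record from the ejabberd debug shell; records in pubsub_node are keyed by {ServiceHost, NodeName}. A minimal sketch, assuming a pubsub service at pubsub.localhost and the node above (both are placeholders; the exact pubsub_node record layout differs between ejabberd versions, so compare against pubsub.hrl for your release):

$ ejabberdctl debug
(ejabberd@localhost)1> mnesia:dirty_read(pubsub_node, {<<"pubsub.localhost">>, <<"/home/localhost/user/test">>}).

The parent field of the record that comes back should reference /home/localhost/user, which is exactly what the parent column you asked about stores.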
You can also choose the dag node plugin with the nodetree_dag tree plugin to get better support for collection and leaf nodes, as specified years ago in XEP-0248, which is deferred now. With this plugin, leaf nodes contain items and collection nodes contain child nodes; you cannot publish to a collection node. This plugin is not maintained, but it used to work.
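For reference, the plugins are selected in the mod_pubsub section of the ejabberd configuration. A minimal sketch of what that could look like in ejabberd.yml (nodetree and plugins are real mod_pubsub options, but the exact syntax varies between releases, so check the documentation for your version):

modules:
  mod_pubsub:
    nodetree: "dag"
    plugins:
      - "dag"

If you change the tree plugin on a service that already contains nodes, dump your database first; nodes created under one tree plugin are not guaranteed to be valid under the other.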
prasadv wrote:
In the pubsub_state table, the subscriptions column appears to be a foreign key, so where can I find its master table?
I know this is an off-topic question, but how are you able to view the pubsub_state and pubsub_node tables in ejabberd? I don't see any pubsub-related tables in the webadmin, nor am I able to interpret the data in the var/lib/ejabberd/pubsub_*.DCD/DCL/DAT files.
If I try the ejabberdctl dump_table command like:
ejabberdctl --admin <user> <host> <pwd> dump_table pubsub_subscription.DCD ./ps-sub.txt
it fails...
Problem 'exit {aborted,{no_exists,'./ps-sub.txt',record_name}}' occurred executing the command.
Stacktrace: [{mnesia,abort,1,[{file,"mnesia.erl"},{line,310}]},
{ejabberd_admin,'-dump_to_textfile/3-fun-0-',1,
[{file,"src/ejabberd_admin.erl"},{line,538}]},
{lists,map,2,[{file,"lists.erl"},{line,1237}]},
{ejabberd_admin,dump_to_textfile,3,
[{file,"src/ejabberd_admin.erl"},{line,537}]},
{ejabberd_admin,dump_tables,2,
[{file,"src/ejabberd_admin.erl"},{line,521}]},
{ejabberd_commands,execute_command2,2,
[{file,"src/ejabberd_commands.erl"},
{line,378}]},
{ejabberd_ctl,call_command,3,
[{file,"src/ejabberd_ctl.erl"},{line,292}]},
{ejabberd_ctl,try_call_command,3,
[{file,"src/ejabberd_ctl.erl"},{line,268}]}]
Any suggestions?
Thanks!!
$ ejabberdctl dump_table
$ ejabberdctl dump_table /tmp/aaa.txt pubsub_subscription
$ cat /tmp/aaa.txt
{tables,[{pubsub_subscription,[{record_name,pubsub_subscription},
                               {attributes,[subid,options]}]}]}.

Note the argument order: the output file comes first, then the table name, without the .DCD suffix. Your command passed them the other way around, so mnesia treated './ps-sub.txt' as a table name and aborted with no_exists.

You can also try:
$ ejabberdctl debug
Erlang/OTP 17 [erts-6.2] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:true]

Eshell V6.2 (abort with ^G)
(ejabberd@localhost)1> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
sql_pool       : with 0 records occupying 299 words of mem
mod_register_ip: with 0 records occupying 299 words of mem
local_config   : with 28 records occupying 1842 words of mem
caps_features  : with 6 records occupying 11658 bytes on disc
access         : with 30 records occupying 899 words of mem
acl            : with 10 records occupying 587 words of mem
shaper         : with 2 records occupying 321 words of mem
carboncopy     : with 0 records occupying 299 words of mem
last_activity  : with 3 records occupying 362 words of mem
pubsub_index   : with 1 records occupying 308 words of mem
vcard_search   : with 2 records occupying 499 words of mem
http_bind      : with 0 records occupying 299 words of mem
archive_prefs  : with 0 records occupying 5464 bytes on disc
motd_users     : with 0 records occupying 299 words of mem
reg_users_counter: with 1 records occupying 313 words of mem
sr_user        : with 0 records occupying 299 words of mem
schema         : with 45 records occupying 5774 words of mem
pubsub_subscription: with 0 records occupying 299 words of mem
roster_version : with 0 records occupying 299 words of mem
session        : with 0 records occupying 299 words of mem
pubsub_last_item: with 0 records occupying 299 words of mem
offline_msg    : with 2 records occupying 6695 bytes on disc
archive_msg    : with 4 records occupying 12108 bytes on disc
route          : with 5 records occupying 373 words of mem
private_storage: with 1 records occupying 5982 bytes on disc
motd           : with 0 records occupying 299 words of mem
sr_group       : with 0 records occupying 299 words of mem
oauth_token    : with 0 records occupying 299 words of mem
pubsub_item    : with 2 records occupying 6312 bytes on disc
muc_room       : with 10 records occupying 1032 words of mem
privacy        : with 2 records occupying 741 words of mem
pubsub_state   : with 4 records occupying 205 words of mem
iq_response    : with 0 records occupying 299 words of mem
passwd         : with 8 records occupying 471 words of mem
temporarily_blocked: with 0 records occupying 299 words of mem
muc_registered : with 0 records occupying 299 words of mem
s2s            : with 0 records occupying 299 words of mem
multicastc     : with 0 records occupying 299 words of mem
session_counter: with 0 records occupying 299 words of mem
irc_custom     : with 0 records occupying 299 words of mem
muc_online_room: with 8 records occupying 451 words of mem
pubsub_node    : with 4 records occupying 803 words of mem
vcard          : with 2 records occupying 6200 bytes on disc
route_multicast: with 0 records occupying 299 words of mem
roster         : with 4 records occupying 568 words of mem
===> System info in version "4.12.3", debug level = none <===
opt_disc. Directory "/var/lib/ejabberd" is used.
use fallback at restart = false
running db nodes = [ejabberd@localhost]
stopped db nodes = []
master node tables = []
remote = []
ram_copies = [access,acl,carboncopy,http_bind,iq_response,local_config,
              mod_register_ip,muc_online_room,multicastc,pubsub_last_item,
              reg_users_counter,route,route_multicast,s2s,session,
              session_counter,shaper,sql_pool,temporarily_blocked]
disc_copies = [irc_custom,last_activity,motd,motd_users,muc_registered,
               muc_room,oauth_token,passwd,privacy,pubsub_index,pubsub_node,
               pubsub_state,pubsub_subscription,roster,roster_version,schema,
               sr_group,sr_user,vcard_search]
disc_only_copies = [archive_msg,archive_prefs,caps_features,offline_msg,
                    private_storage,pubsub_item,vcard]
[{ejabberd@localhost,disc_copies}] = [roster,pubsub_node,irc_custom,
                                      muc_registered,passwd,pubsub_state,
                                      privacy,muc_room,oauth_token,sr_group,
                                      motd,roster_version,pubsub_subscription,
                                      schema,sr_user,motd_users,vcard_search,
                                      pubsub_index,last_activity]
[{ejabberd@localhost,disc_only_copies}] = [vcard,pubsub_item,caps_features,
                                           private_storage,archive_msg,
                                           offline_msg,archive_prefs]
[{ejabberd@localhost,ram_copies}] = [shaper,route_multicast,muc_online_room,
                                     mod_register_ip,session_counter,
                                     multicastc,s2s,temporarily_blocked,
                                     iq_response,local_config,acl,access,
                                     route,pubsub_last_item,session,
                                     reg_users_counter,http_bind,carboncopy,
                                     sql_pool]
62 transactions committed, 83 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
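If the goal is just to peek inside the pubsub tables, you can also do that from the same debug shell with dirty reads (a sketch; fine for inspection, since nothing is being written):

(ejabberd@localhost)2> mnesia:dirty_all_keys(pubsub_node).
(ejabberd@localhost)3> mnesia:dirty_select(pubsub_state, [{'_', [], ['$_']}]).

The first call returns the key of every node, the second returns every pubsub_state record. And dump_table works for any table in the listing above, e.g. ejabberdctl dump_table /tmp/nodes.txt pubsub_node gives you a text version of the node table, parent column included.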