[YG Conlang Archives] > [engelang group] > messages



Re: [engelang] Logos Initiative



Jorge Llambías <jjllambias@hidden.email> [engelang], on 02/10/2014 22:56:
On Thu, Oct 2, 2014 at 4:41 AM, And Rosta <and.rosta@hidden.email> [engelang] <engelang@yahoogroups.com> wrote:

      So to actually do reflexives I've used two solutions. One is to use an extra predicate like Lojban {du}, so that "X loves X" gets rendered as "X loves Y & X du Y". This, incidentally, is the strategy I use for so-called 2D-Livagian notation of predicate logic, where the notation can't handle reflexivity.

And in practice I suppose you would hardly ever need to resort to
"du" anyway, because the reflexivised ESAP will normally merge with
an ESAP from another predicate, so you can merge the two ESAPs
independently with that other ESAP (if it comes before)

That's true, tho to reduce the need for counting, I have so far decreed that any vlist item that has already merged with an ESAP of the current predicate is invisible to any subsequent ESAP of the current predicate. This means counting through fewer list items to find the one to merge with.
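In case a toy model helps make the rule concrete, the invisibility filter could be sketched like this in Python (the list items and function name are my own illustration, not actual Livagian machinery):

```python
# Toy model of the invisibility rule: vlist items that have already
# merged with an ESAP of the current predicate are skipped when a
# later ESAP of the same predicate counts back through the list.

def visible_vlist(vlist, already_merged):
    """Return the vlist as seen by a subsequent ESAP of the current predicate."""
    return [item for item in vlist if item not in already_merged]

vlist = ["a1", "b1", "c1"]
already_merged = {"b1"}   # b1 merged with an earlier ESAP of this predicate

assert visible_vlist(vlist, already_merged) == ["a1", "c1"]
# counting back 1 now reaches c1, and back 2 reaches a1 rather than b1
```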

or use that other ESAP's inflections to merge it with both (if it
comes later).

No, because (in current Livagian) an ESAP in the current predicate may merge with only one vlist item (not counting iterative alternation between additive and intersective mergers).


    The other solution is to use special inflections that augment a stem to reflexivize it; these inflections would augment the inflection that shows the predicate's adicity and how many ESAPs it has. In 2014 Livagian I use this latter method, at least for predicates of maximal tetradic adicity (including event arg, which can't be inflectionally marked as reflexive).

I suppose you would need something like that if you need to show that
two ISAPs are merged.

Yes. Using the du-method for reflexivity, you can't merge ISAPs and you have to merge ESAPs instead. Using the reflexive-inflection solution, the reflexive inflection applies before the inflection that specifies which APs are ESAPs.

    As for additive merger, the particular case of "brothers" is unlikely to arise, because rather than having symmetric predicates like "X is brother of Y", I tend to use the "X joi Y are brothers" method. But something like "mother and child (are well)" (i.e. "X is mother of Y, X joi Y are well") can arise, and iirc it works by making every item on the vlist visible to an additively-merged incoming ESAP.

I was thinking the additive merge of two places from the same predicate
would correspond to reciprocals, but I guess not, "mother and child"
is not a full reciprocal. How would it work with the two predicates
in the opposite order? "X are well, Y is mother of Z, X du Y&Z". Can
this order be done without "du"? It seems you would need a third type
of merge: "X are well, (among X) is mother of (among X)".

1. X are well
2. Y is mother of Z
3. Additive merge: Z&Y
4. Intersective merge: (Z&Y)+X
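Modeling a plural as a set of individuals, the four steps come out like this (a toy rendering of my own, not Livagian notation):

```python
# Steps 1-4 above, with plurals modeled as sets of individuals.
# The set operations stand in for the mergers; they are not the
# actual mechanism, just an illustration of the result.

Y = {"mother"}          # 2. Y is mother of Z
Z = {"child"}

ZY = Z | Y              # 3. additive merge: the plural Z&Y
X = ZY                  # 4. intersective merge: X, the ones who are well,
                        #    is identified with the plural Z&Y

assert X == {"mother", "child"}   # "mother and child are well"
```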

And how would one do full reciprocals, say "mother and child love one
another"? "X is mother of Y, X&Y love X&Y"? Maybe there are
reciprocal inflections as well as reflexive ones?

I haven't worked on reciprocals since the 1990s, but casting my mind back to then, I recall there were a range of reciprocal predicates (for things like "each member of X ... every other member of X", "each member of X ... every member of X", "each member of X broda some other member of X and each member of X se broda some other member of X", and so forth). Roughly speaking they're dyadic predicates with one argument a plural and the other a ka-phrase containing two ce'u (-- Lojbanizing the explication).

But you're right that additive merger offers new possibilities for reciprocals of a sort that neutralize the reflexive--reciprocal distinction.

1. X is mother of Y
2. Additive merge: X&Y
3. Z loves Z's self [= inflectional reflexive]
4. Intersective merge: Z+(X&Y)

1. X is mother of Y
2. Additive merge: X&Y
3. Z loves W
4. Intersective merge: Z+(X&Y)
5. V du U
6. Intersective merge: V+(Z+(X&Y))
7. Intersective merge: U+W
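Treating intersective merger and "du" alike as coreference assertions, a union-find sketch confirms that the second derivation ends with both love-places denoting the plural X&Y (the machinery is my illustration only, not part of Livagian):

```python
# Union-find sketch of the du-based derivation (steps 1-7 above);
# identify() stands in for both intersective merger and "du".

parent = {}

def find(x):
    """Follow parent links to the representative of x's coreference class."""
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

def identify(a, b):
    """Assert that a and b have the same (plural) referent."""
    parent[find(a)] = find(b)

XY = "X&Y"              # 2. additive merge of mother X and child Y
identify("Z", XY)       # 4. intersective merge: Z+(X&Y)
identify("V", "Z")      # 6. intersective merge: V+(Z+(X&Y))
identify("U", "W")      # 7. intersective merge: U+W
identify("V", "U")      # 5. the "du" predicate itself asserts V = U

# Both places of "love" now denote the plural X&Y:
assert find("Z") == find(XY) and find("W") == find(XY)
```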

    There are still more complications in that you can get cycles of alternate additive and intersective merger involving the same incoming ESAP. For example, "xorxes joi And discuss" might have a word order "discuss, xorxes, And", such that -- ignoring event arguments -- there is a vlist <Discuss1, xorxes1>, and so first And1 additively merges with xorxes1, and then the result, And1&xorxes1, intersectively merges with Discuss1. And if the phrase were "Obama and discussants xorxes and And", with word order "Obama, discuss, xorxes, And", (And1&xorxes1)+Discuss1 would additively merge with Obama1, yielding ((And1&xorxes1)+Discuss1)&Obama1. I currently require that the counting for this iterative process be successive, so that each new merger of the same ESAP involves counting back from the vlist position of the previous merger, not from the end of the list all over again. This is the one respect in which predicate order is not free, because predicates must occur in an order that matches iterated merging. As I write this paragraph I realize that in principle (tho improbably), you could have two distinct iterated merger sequences involving the same predicates but each requiring the predicates to be in a different sequence -- which means I am going to have to rethink my solution to this bit.

Is there a reason why the counting position can't be reset? (The
extra cost in computation I suppose, but at this point we are way
over budget anyway.)

The problem would indeed not arise if for the iterated process the counting were not successive but rather was reset. As you say, the lower computation cost was the reason for making the counting successive, but given that successive counting won't work, there is no alternative but to have the counting reset. And don't go reminding me how overbudget on computation we are, else I'll give up in despair, as I periodically do, instead of working on reducing the computation cost...
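For concreteness, here is how the two counting schemes differ on the "discuss, xorxes" example (a toy sketch; the index arithmetic is mine, and in this simple case both schemes reach the same item -- only the size of the count differs):

```python
# Successive vs. reset counting over the vlist <Discuss1, xorxes1>,
# for the incoming ESAP And1.

vlist = ["Discuss1", "xorxes1"]

def count_back(start, n):
    """Count back n items, starting just after position `start`."""
    return start - n

# First merger: And1 additively merges with xorxes1 (count back 1 from the end).
first = count_back(len(vlist), 1)          # index 1 -> xorxes1

# Successive counting: the next merger counts back from the previous position.
second_successive = count_back(first, 1)   # index 0 -> Discuss1, with a count of 1

# Reset counting: start again from the end of the vlist.
second_reset = count_back(len(vlist), 2)   # index 0 -> Discuss1, with a count of 2

assert vlist[first] == "xorxes1"
assert second_successive == second_reset == 0   # same item; successive counting is cheaper
```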

--And.